00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3668
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3270
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.084 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.085 The recommended git tool is: git
00:00:00.085 using credential 00000000-0000-0000-0000-000000000002
00:00:00.087 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.134 Fetching changes from the remote Git repository
00:00:00.135 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.182 Using shallow fetch with depth 1
00:00:00.182 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.182 > git --version # timeout=10
00:00:00.225 > git --version # 'git version 2.39.2'
00:00:00.225 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.251 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.251 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.855 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.870 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.882 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:04.883 > git config core.sparsecheckout # timeout=10
00:00:04.893 > git read-tree -mu HEAD # timeout=10
00:00:04.913 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:04.931 Commit message: "inventory: add WCP3 to free inventory"
00:00:04.931 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
00:00:05.037 [Pipeline] Start of Pipeline
00:00:05.062 [Pipeline] library
00:00:05.065 Loading library shm_lib@master
00:00:07.001 Library shm_lib@master is cached. Copying from home.
00:00:07.031 [Pipeline] node
00:00:07.120 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.122 [Pipeline] {
00:00:07.134 [Pipeline] catchError
00:00:07.136 [Pipeline] {
00:00:07.150 [Pipeline] wrap
00:00:07.159 [Pipeline] {
00:00:07.166 [Pipeline] stage
00:00:07.167 [Pipeline] { (Prologue)
00:00:07.347 [Pipeline] sh
00:00:07.623 + logger -p user.info -t JENKINS-CI
00:00:07.643 [Pipeline] echo
00:00:07.644 Node: GP11
00:00:07.652 [Pipeline] sh
00:00:07.949 [Pipeline] setCustomBuildProperty
00:00:07.958 [Pipeline] echo
00:00:07.959 Cleanup processes
00:00:07.963 [Pipeline] sh
00:00:08.293 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.293 1663354 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.309 [Pipeline] sh
00:00:08.592 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.592 ++ grep -v 'sudo pgrep'
00:00:08.592 ++ awk '{print $1}'
00:00:08.592 + sudo kill -9
00:00:08.592 + true
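
The "Cleanup processes" step above kills any stale test processes left on the node by a previous run: pgrep lists processes matching the workspace path, grep -v drops the pgrep invocation itself, awk extracts the PIDs, and the trailing `+ true` keeps the step green when there is nothing to kill (kill -9 with an empty argument list fails). A minimal standalone sketch of the same pattern, assuming the same workspace path:

  WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  # Collect PIDs of leftover test processes, excluding the pgrep itself.
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # Tolerate an empty PID list, as the `+ true` in the trace above does.
  sudo kill -9 $pids || true
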
00:00:08.605 [Pipeline] cleanWs
00:00:08.614 [WS-CLEANUP] Deleting project workspace...
00:00:08.614 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.620 [WS-CLEANUP] done
00:00:08.624 [Pipeline] setCustomBuildProperty
00:00:08.635 [Pipeline] sh
00:00:08.912 + sudo git config --global --replace-all safe.directory '*'
00:00:08.989 [Pipeline] httpRequest
00:00:09.009 [Pipeline] echo
00:00:09.010 Sorcerer 10.211.164.101 is alive
00:00:09.016 [Pipeline] httpRequest
00:00:09.019 HttpMethod: GET
00:00:09.020 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:09.021 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:09.023 Response Code: HTTP/1.1 200 OK
00:00:09.024 Success: Status code 200 is in the accepted range: 200,404
00:00:09.024 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:09.917 [Pipeline] sh
00:00:10.200 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:10.220 [Pipeline] httpRequest
00:00:10.243 [Pipeline] echo
00:00:10.246 Sorcerer 10.211.164.101 is alive
00:00:10.256 [Pipeline] httpRequest
00:00:10.262 HttpMethod: GET
00:00:10.262 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:10.263 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:10.271 Response Code: HTTP/1.1 200 OK
00:00:10.271 Success: Status code 200 is in the accepted range: 200,404
00:00:10.272 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:34.027 [Pipeline] sh
00:00:34.311 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:36.856 [Pipeline] sh
00:00:37.136 + git -C spdk log --oneline -n5
00:00:37.136 719d03c6a sock/uring: only register net impl if supported
00:00:37.137 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:00:37.137 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:00:37.137 6c7c1f57e accel: add sequence outstanding stat
00:00:37.137 3bc8e6a26 accel: add utility to put task
00:00:37.157 [Pipeline] withCredentials
00:00:37.168 > git --version # timeout=10
00:00:37.186 > git --version # 'git version 2.39.2'
00:00:37.203 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:37.205 [Pipeline] {
00:00:37.242 [Pipeline] retry
00:00:37.244 [Pipeline] {
00:00:37.266 [Pipeline] sh
00:00:37.550 + git ls-remote http://dpdk.org/git/dpdk main
00:00:38.134 [Pipeline] }
00:00:38.157 [Pipeline] // retry
00:00:38.162 [Pipeline] }
00:00:38.182 [Pipeline] // withCredentials
00:00:38.191 [Pipeline] httpRequest
00:00:38.210 [Pipeline] echo
00:00:38.212 Sorcerer 10.211.164.101 is alive
00:00:38.219 [Pipeline] httpRequest
00:00:38.223 HttpMethod: GET
00:00:38.224 URL: http://10.211.164.101/packages/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz
00:00:38.225 Sending request to url: http://10.211.164.101/packages/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz
00:00:38.233 Response Code: HTTP/1.1 200 OK
00:00:38.233 Success: Status code 200 is in the accepted range: 200,404
00:00:38.234 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz
00:00:41.323 [Pipeline] sh
00:00:41.607 + tar --no-same-owner -xf dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz
00:00:42.998 [Pipeline] sh
00:00:43.280 + git -C dpdk log --oneline -n5
00:00:43.280 fa8d2f7f28 version: 24.07-rc2
00:00:43.280 d4bc3c2e01 maintainers: update for cxgbe driver
00:00:43.280 2227c0ed9a maintainers: update for Microsoft drivers
00:00:43.280 8385370337 maintainers: update for Arm
00:00:43.280 62edcfd6ea net/nfp: support parsing packet type in vector Rx
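
Rather than cloning from upstream, the job confirms that the internal package mirror ("Sorcerer", 10.211.164.101) is alive and downloads pre-packed source tarballs keyed by commit SHA, then unpacks them in the workspace. A hedged equivalent using curl, with the mirror host and SHA taken from this run:

  MIRROR=http://10.211.164.101/packages
  SHA=719d03c6adf20011bb50ac4109e0be7741c0d1c5   # SPDK revision under test
  curl -fSs -o "spdk_${SHA}.tar.gz" "$MIRROR/spdk_${SHA}.tar.gz"
  # --no-same-owner skips chown on extraction, matching the step above.
  tar --no-same-owner -xf "spdk_${SHA}.tar.gz"
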
00:00:43.293 [Pipeline] }
00:00:43.312 [Pipeline] // stage
00:00:43.325 [Pipeline] stage
00:00:43.327 [Pipeline] { (Prepare)
00:00:43.352 [Pipeline] writeFile
00:00:43.372 [Pipeline] sh
00:00:43.656 + logger -p user.info -t JENKINS-CI
00:00:43.670 [Pipeline] sh
00:00:43.953 + logger -p user.info -t JENKINS-CI
00:00:43.966 [Pipeline] sh
00:00:44.247 + cat autorun-spdk.conf
00:00:44.248 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:44.248 SPDK_TEST_NVMF=1
00:00:44.248 SPDK_TEST_NVME_CLI=1
00:00:44.248 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:44.248 SPDK_TEST_NVMF_NICS=e810
00:00:44.248 SPDK_TEST_VFIOUSER=1
00:00:44.248 SPDK_RUN_UBSAN=1
00:00:44.248 NET_TYPE=phy
00:00:44.248 SPDK_TEST_NATIVE_DPDK=main
00:00:44.248 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:44.255 RUN_NIGHTLY=1
00:00:44.261 [Pipeline] readFile
00:00:44.324 [Pipeline] withEnv
00:00:44.326 [Pipeline] {
00:00:44.337 [Pipeline] sh
00:00:44.615 + set -ex
00:00:44.615 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:44.615 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:44.615 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:44.615 ++ SPDK_TEST_NVMF=1
00:00:44.615 ++ SPDK_TEST_NVME_CLI=1
00:00:44.615 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:44.615 ++ SPDK_TEST_NVMF_NICS=e810
00:00:44.615 ++ SPDK_TEST_VFIOUSER=1
00:00:44.615 ++ SPDK_RUN_UBSAN=1
00:00:44.615 ++ NET_TYPE=phy
00:00:44.615 ++ SPDK_TEST_NATIVE_DPDK=main
00:00:44.615 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:44.615 ++ RUN_NIGHTLY=1
00:00:44.615 + case $SPDK_TEST_NVMF_NICS in
00:00:44.615 + DRIVERS=ice
00:00:44.615 + [[ tcp == \r\d\m\a ]]
00:00:44.615 + [[ -n ice ]]
00:00:44.615 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:44.615 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:44.615 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:44.615 rmmod: ERROR: Module irdma is not currently loaded
00:00:44.615 rmmod: ERROR: Module i40iw is not currently loaded
00:00:44.615 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:44.615 + true
00:00:44.615 + for D in $DRIVERS
00:00:44.615 + sudo modprobe ice
00:00:44.615 + exit 0
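
Because SPDK_TEST_NVMF_NICS=e810 and the transport is tcp, the step unloads the RDMA-capable modules (rmmod errors for modules that are not loaded are expected and tolerated) and then loads ice, the driver for Intel E810 NICs. The same logic as a standalone sketch:

  DRIVERS=ice   # e810 NICs are driven by ice
  # rmmod fails for modules that are not loaded; that is fine here.
  sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
  for D in $DRIVERS; do
    sudo modprobe "$D"
  done
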
00:00:44.623 [Pipeline] }
00:00:44.671 [Pipeline] // withEnv
00:00:44.676 [Pipeline] }
00:00:44.689 [Pipeline] // stage
00:00:44.699 [Pipeline] catchError
00:00:44.701 [Pipeline] {
00:00:44.715 [Pipeline] timeout
00:00:44.716 Timeout set to expire in 50 min
00:00:44.717 [Pipeline] {
00:00:44.732 [Pipeline] stage
00:00:44.734 [Pipeline] { (Tests)
00:00:44.750 [Pipeline] sh
00:00:45.026 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:45.026 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:45.026 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:45.026 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:45.026 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:45.026 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:45.026 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:45.026 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:45.026 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:45.026 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:45.026 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:45.026 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:45.026 + source /etc/os-release
00:00:45.026 ++ NAME='Fedora Linux'
00:00:45.026 ++ VERSION='38 (Cloud Edition)'
00:00:45.026 ++ ID=fedora
00:00:45.026 ++ VERSION_ID=38
00:00:45.026 ++ VERSION_CODENAME=
00:00:45.026 ++ PLATFORM_ID=platform:f38
00:00:45.026 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:45.026 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:45.026 ++ LOGO=fedora-logo-icon
00:00:45.026 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:45.026 ++ HOME_URL=https://fedoraproject.org/
00:00:45.026 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:45.026 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:45.026 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:45.026 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:45.026 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:45.026 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:45.026 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:45.026 ++ SUPPORT_END=2024-05-14
00:00:45.026 ++ VARIANT='Cloud Edition'
00:00:45.026 ++ VARIANT_ID=cloud
00:00:45.026 + uname -a
00:00:45.026 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:45.026 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:45.961 Hugepages
00:00:45.961 node hugesize free / total
00:00:45.961 node0 1048576kB 0 / 0
00:00:45.961 node0 2048kB 0 / 0
00:00:45.961 node1 1048576kB 0 / 0
00:00:45.961 node1 2048kB 0 / 0
00:00:45.961
00:00:45.961 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:45.961 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:00:45.961 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:00:45.961 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:00:45.961 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:00:45.961 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:00:45.961 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:00:45.961 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:00:45.961 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:00:45.961 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:00:45.961 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:00:45.961 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:00:45.961 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:00:45.961 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:00:45.961 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:00:45.961 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:00:45.961 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:00:46.264 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
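
setup.sh status reports that no hugepages are reserved yet on either NUMA node and lists the I/OAT DMA channels and the single NVMe drive by PCI address. The hugepage columns come straight from sysfs; a sketch that reads the same counters directly:

  # free / total hugepages per NUMA node and page size, as in the table above.
  for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
      printf '%s %s: %s / %s\n' "${node##*/}" "${hp##*/}" \
        "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
    done
  done
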
00:00:46.264 + rm -f /tmp/spdk-ld-path
00:00:46.264 + source autorun-spdk.conf
00:00:46.264 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:46.264 ++ SPDK_TEST_NVMF=1
00:00:46.264 ++ SPDK_TEST_NVME_CLI=1
00:00:46.264 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:46.264 ++ SPDK_TEST_NVMF_NICS=e810
00:00:46.264 ++ SPDK_TEST_VFIOUSER=1
00:00:46.264 ++ SPDK_RUN_UBSAN=1
00:00:46.264 ++ NET_TYPE=phy
00:00:46.264 ++ SPDK_TEST_NATIVE_DPDK=main
00:00:46.264 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:46.264 ++ RUN_NIGHTLY=1
00:00:46.264 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:46.264 + [[ -n '' ]]
00:00:46.264 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:46.264 + for M in /var/spdk/build-*-manifest.txt
00:00:46.264 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:46.264 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:46.264 + for M in /var/spdk/build-*-manifest.txt
00:00:46.264 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:46.264 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:46.264 ++ uname
00:00:46.264 + [[ Linux == \L\i\n\u\x ]]
00:00:46.264 + sudo dmesg -T
00:00:46.264 + sudo dmesg --clear
00:00:46.264 + dmesg_pid=1664059
00:00:46.264 + [[ Fedora Linux == FreeBSD ]]
00:00:46.264 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:46.264 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:46.264 + sudo dmesg -Tw
00:00:46.264 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:46.264 + [[ -x /usr/src/fio-static/fio ]]
00:00:46.264 + export FIO_BIN=/usr/src/fio-static/fio
00:00:46.264 + FIO_BIN=/usr/src/fio-static/fio
00:00:46.264 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:46.264 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:46.264 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:46.264 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:46.264 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:46.264 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:46.264 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:46.264 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
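
The exports above follow a probe-and-export pattern: each optional tool (a static fio build, the vfio-user and vanilla QEMU builds) is exported only if it exists at its well-known location, so later test code can key off whether the variable is set. Reduced to its core, assuming the same host layout:

  if [[ -x /usr/src/fio-static/fio ]]; then
    export FIO_BIN=/usr/src/fio-static/fio
  fi
  if [[ -e /usr/local/qemu/vanilla-latest ]]; then
    export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
  fi
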
00:00:46.264 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:46.264 Test configuration:
00:00:46.264 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:46.264 SPDK_TEST_NVMF=1
00:00:46.264 SPDK_TEST_NVME_CLI=1
00:00:46.264 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:46.264 SPDK_TEST_NVMF_NICS=e810
00:00:46.264 SPDK_TEST_VFIOUSER=1
00:00:46.264 SPDK_RUN_UBSAN=1
00:00:46.264 NET_TYPE=phy
00:00:46.264 SPDK_TEST_NATIVE_DPDK=main
00:00:46.264 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:46.264 RUN_NIGHTLY=1
09:34:02 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
09:34:02 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
09:34:02 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
09:34:02 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
09:34:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:34:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:34:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:34:02 -- paths/export.sh@5 -- $ export PATH
09:34:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:34:02 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
09:34:02 -- common/autobuild_common.sh@444 -- $ date +%s
09:34:02 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721028842.XXXXXX
09:34:02 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721028842.CltWVd
09:34:02 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
09:34:02 -- common/autobuild_common.sh@450 -- $ '[' -n main ']'
09:34:02 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
09:34:02 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
09:34:02 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
09:34:02 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
09:34:02 -- common/autobuild_common.sh@460 -- $ get_config_params
09:34:02 -- common/autotest_common.sh@396 -- $ xtrace_disable
09:34:02 -- common/autotest_common.sh@10 -- $ set +x
00:00:46.265 09:34:02 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
09:34:02 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
09:34:02 -- pm/common@17 -- $ local monitor
09:34:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:34:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:34:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:34:02 -- pm/common@21 -- $ date +%s
09:34:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:34:02 -- pm/common@21 -- $ date +%s
09:34:02 -- pm/common@25 -- $ sleep 1
09:34:02 -- pm/common@21 -- $ date +%s
09:34:02 -- pm/common@21 -- $ date +%s
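
start_monitor_resources launches four background samplers (CPU load, vmstat, CPU temperature, and BMC power, the last via sudo -E), each logging under output/power with a name derived from the epoch timestamp, and an EXIT trap stops them when autorun finishes. A hedged sketch of launching one such monitor; the PID file name here is hypothetical:

  out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/power
  mkdir -p "$out"
  # collect-cpu-load is one of the samplers in spdk/scripts/perf/pm/.
  ./collect-cpu-load -d "$out" -l -p "monitor.autobuild.sh.$(date +%s)" &
  echo $! >> /tmp/monitor.pids   # hypothetical PID file for later cleanup
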
09:34:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721028842
00:00:46.265 09:34:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721028842
00:00:46.265 09:34:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721028842
00:00:46.265 09:34:02 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721028842
00:00:46.265 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721028842_collect-vmstat.pm.log
00:00:46.265 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721028842_collect-cpu-load.pm.log
00:00:46.265 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721028842_collect-cpu-temp.pm.log
00:00:46.265 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721028842_collect-bmc-pm.bmc.pm.log
00:00:47.196 09:34:03 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:00:47.196 09:34:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:47.196 09:34:03 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:47.196 09:34:03 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:47.196 09:34:03 -- spdk/autobuild.sh@16 -- $ date -u
00:00:47.196 Mon Jul 15 07:34:03 AM UTC 2024
00:00:47.196 09:34:03 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:47.196 v24.09-pre-202-g719d03c6a
00:00:47.196 09:34:03 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:47.196 09:34:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:47.196 09:34:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:47.196 09:34:03 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:00:47.196 09:34:03 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:47.196 09:34:03 -- common/autotest_common.sh@10 -- $ set +x
00:00:47.196 ************************************
00:00:47.196 START TEST ubsan
00:00:47.196 ************************************
00:00:47.196 09:34:03 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:00:47.196 using ubsan
00:00:47.196
00:00:47.196 real 0m0.000s
00:00:47.196 user 0m0.000s
00:00:47.196 sys 0m0.000s
00:00:47.196 09:34:03 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:00:47.197 09:34:03 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:47.197 ************************************
00:00:47.197 END TEST ubsan
00:00:47.197 ************************************
00:00:47.453 09:34:03 -- common/autotest_common.sh@1142 -- $ return 0
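
The ubsan block above shows the run_test convention used throughout autorun: print a START banner, run the command under `time`, print an END banner, and propagate the exit status. A simplified sketch of such a wrapper (SPDK's real implementation in test/common/autotest_common.sh does additional bookkeeping):

  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }
  run_test ubsan echo 'using ubsan'
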
00:00:47.453 09:34:03 -- spdk/autobuild.sh@27 -- $ '[' -n main ']'
00:00:47.453 09:34:03 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:00:47.454 09:34:03 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk
00:00:47.454 09:34:03 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']'
00:00:47.454 09:34:03 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:47.454 09:34:03 -- common/autotest_common.sh@10 -- $ set +x
00:00:47.454 ************************************
00:00:47.454 START TEST build_native_dpdk
00:00:47.454 ************************************
00:00:47.454 09:34:04 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk
09:34:04 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
09:34:04 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
09:34:04 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
09:34:04 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
09:34:04 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
09:34:04 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
09:34:04 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
09:34:04 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
09:34:04 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
09:34:04 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
09:34:04 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
09:34:04 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
09:34:04 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
09:34:04 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
09:34:04 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
09:34:04 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
09:34:04 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
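
_build_native_dpdk starts by detecting the compiler: gcc -dumpversion reports major version 13 here, and that number gates which extra warning flags get appended to the DPDK CFLAGS just below (-Werror for GCC >= 5, -Wno-stringop-overflow for GCC >= 10). The same gating as a sketch:

  compiler_version=$(gcc -dumpversion | cut -d. -f1)
  dpdk_cflags='-fPIC -g -fcommon'
  [[ $compiler_version -ge 5 ]] && dpdk_cflags+=' -Werror'
  [[ $compiler_version -ge 10 ]] && dpdk_cflags+=' -Wno-stringop-overflow'
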
09:34:04 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
09:34:04 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
09:34:04 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:00:47.454 fa8d2f7f28 version: 24.07-rc2
00:00:47.454 d4bc3c2e01 maintainers: update for cxgbe driver
00:00:47.454 2227c0ed9a maintainers: update for Microsoft drivers
00:00:47.454 8385370337 maintainers: update for Arm
00:00:47.454 62edcfd6ea net/nfp: support parsing packet type in vector Rx
00:00:47.454 09:34:04 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
09:34:04 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
09:34:04 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc2
09:34:04 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
09:34:04 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
09:34:04 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
09:34:04 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
09:34:04 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
09:34:04 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
09:34:04 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
09:34:04 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
09:34:04 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
09:34:04 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
09:34:04 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
09:34:04 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
09:34:04 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
09:34:04 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
09:34:04 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc2 21.11.0
09:34:04 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc2 '<' 21.11.0
09:34:04 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
09:34:04 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
09:34:04 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
09:34:04 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
09:34:04 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
09:34:04 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
09:34:04 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
09:34:04 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4
09:34:04 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
09:34:04 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
09:34:04 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:00:47.454 09:34:04 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
09:34:04 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
09:34:04 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
09:34:04 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24
09:34:04 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24
09:34:04 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]]
09:34:04 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24
09:34:04 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24
09:34:04 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
09:34:04 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
09:34:04 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
09:34:04 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
09:34:04 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
09:34:04 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
09:34:04 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
09:34:04 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:00:47.454 patching file config/rte_config.h
00:00:47.454 Hunk #1 succeeded at 70 (offset 11 lines).
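
The lt/cmp_versions trace above splits both version strings on '.', '-' and ':' and compares them field by field; 24.07.0-rc2 is not older than 21.11.0, so the comparison returns 1 and the new-DPDK patch path is taken (the rte_config.h hunk applies with an offset). A shorter, roughly equivalent check using GNU sort -V, for illustration only (rc suffixes can order slightly differently than the field-by-field numeric compare):

  # True when $1 is strictly older than $2 (sort -V orders version strings).
  lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
  lt 24.07.0-rc2 21.11.0 && echo older || echo 'not older'   # prints: not older
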
00:00:47.454 09:34:04 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false
09:34:04 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s
09:34:04 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']'
09:34:04 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
09:34:04 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:00:51.642 The Meson build system
00:00:51.642 Version: 1.3.1
00:00:51.642 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:00:51.642 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:00:51.642 Build type: native build
00:00:51.642 Program cat found: YES (/usr/bin/cat)
00:00:51.642 Project name: DPDK
00:00:51.642 Project version: 24.07.0-rc2
00:00:51.642 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:00:51.642 C linker for the host machine: gcc ld.bfd 2.39-16
00:00:51.642 Host machine cpu family: x86_64
00:00:51.642 Host machine cpu: x86_64
00:00:51.642 Message: ## Building in Developer Mode ##
00:00:51.642 Program pkg-config found: YES (/usr/bin/pkg-config)
00:00:51.642 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:00:51.642 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:00:51.642 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools
00:00:51.642 Program cat found: YES (/usr/bin/cat)
00:00:51.642 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:00:51.642 Compiler for C supports arguments -march=native: YES
00:00:51.642 Checking for size of "void *" : 8
00:00:51.642 Checking for size of "void *" : 8 (cached)
00:00:51.642 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:00:51.642 Library m found: YES
00:00:51.642 Library numa found: YES
00:00:51.642 Has header "numaif.h" : YES
00:00:51.642 Library fdt found: NO
00:00:51.642 Library execinfo found: NO
00:00:51.642 Has header "execinfo.h" : YES
00:00:51.642 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:00:51.642 Run-time dependency libarchive found: NO (tried pkgconfig)
00:00:51.642 Run-time dependency libbsd found: NO (tried pkgconfig)
00:00:51.642 Run-time dependency jansson found: NO (tried pkgconfig)
00:00:51.642 Run-time dependency openssl found: YES 3.0.9
00:00:51.642 Run-time dependency libpcap found: YES 1.10.4
00:00:51.642 Has header "pcap.h" with dependency libpcap: YES
00:00:51.642 Compiler for C supports arguments -Wcast-qual: YES
00:00:51.642 Compiler for C supports arguments -Wdeprecated: YES
00:00:51.642 Compiler for C supports arguments -Wformat: YES
00:00:51.642 Compiler for C supports arguments -Wformat-nonliteral: NO
00:00:51.642 Compiler for C supports arguments -Wformat-security: NO
00:00:51.642 Compiler for C supports arguments -Wmissing-declarations: YES
00:00:51.642 Compiler for C supports arguments -Wmissing-prototypes: YES
00:00:51.642 Compiler for C supports arguments -Wnested-externs: YES
00:00:51.642 Compiler for C supports arguments -Wold-style-definition: YES
00:00:51.642 Compiler for C supports arguments -Wpointer-arith: YES
00:00:51.642 Compiler for C supports arguments -Wsign-compare: YES
00:00:51.642 Compiler for C supports arguments -Wstrict-prototypes: YES
00:00:51.642 Compiler for C supports arguments -Wundef: YES
00:00:51.642 Compiler for C supports arguments -Wwrite-strings: YES
00:00:51.642 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:00:51.642 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:00:51.642 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:00:51.642 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:00:51.642 Program objdump found: YES (/usr/bin/objdump)
00:00:51.642 Compiler for C supports arguments -mavx512f: YES
00:00:51.642 Checking if "AVX512 checking" compiles: YES
00:00:51.643 Fetching value of define "__SSE4_2__" : 1
00:00:51.643 Fetching value of define "__AES__" : 1
00:00:51.643 Fetching value of define "__AVX__" : 1
00:00:51.643 Fetching value of define "__AVX2__" : (undefined)
00:00:51.643 Fetching value of define "__AVX512BW__" : (undefined)
00:00:51.643 Fetching value of define "__AVX512CD__" : (undefined)
00:00:51.643 Fetching value of define "__AVX512DQ__" : (undefined)
00:00:51.643 Fetching value of define "__AVX512F__" : (undefined)
00:00:51.643 Fetching value of define "__AVX512VL__" : (undefined)
00:00:51.643 Fetching value of define "__PCLMUL__" : 1
00:00:51.643 Fetching value of define "__RDRND__" : 1
00:00:51.643 Fetching value of define "__RDSEED__" : (undefined)
00:00:51.643 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:00:51.643 Compiler for C supports arguments -Wno-format-truncation: YES
00:00:51.643 Message: lib/log: Defining dependency "log"
00:00:51.643 Message: lib/kvargs: Defining dependency "kvargs"
00:00:51.643 Message: lib/argparse: Defining dependency "argparse"
00:00:51.643 Message: lib/telemetry: Defining dependency "telemetry"
00:00:51.643 Checking for function "getentropy" : NO
00:00:51.643 Message: lib/eal: Defining dependency "eal"
00:00:51.643 Message: lib/ptr_compress: Defining dependency "ptr_compress"
00:00:51.643 Message: lib/ring: Defining dependency "ring"
00:00:51.643 Message: lib/rcu: Defining dependency "rcu"
00:00:51.643 Message: lib/mempool: Defining dependency "mempool"
00:00:51.643 Message: lib/mbuf: Defining dependency "mbuf"
00:00:51.643 Fetching value of define "__PCLMUL__" : 1 (cached)
00:00:51.643 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:00:51.643 Compiler for C supports arguments -mpclmul: YES
00:00:51.643 Compiler for C supports arguments -maes: YES
00:00:51.643 Compiler for C supports arguments -mavx512f: YES (cached)
00:00:51.643 Compiler for C supports arguments -mavx512bw: YES
00:00:51.643 Compiler for C supports arguments -mavx512dq: YES
00:00:51.643 Compiler for C supports arguments -mavx512vl: YES
00:00:51.643 Compiler for C supports arguments -mvpclmulqdq: YES
00:00:51.643 Compiler for C supports arguments -mavx2: YES
00:00:51.643 Compiler for C supports arguments -mavx: YES
00:00:51.643 Message: lib/net: Defining dependency "net"
00:00:51.643 Message: lib/meter: Defining dependency "meter"
00:00:51.643 Message: lib/ethdev: Defining dependency "ethdev"
00:00:51.643 Message: lib/pci: Defining dependency "pci"
00:00:51.643 Message: lib/cmdline: Defining dependency "cmdline"
00:00:51.643 Message: lib/metrics: Defining dependency "metrics"
00:00:51.643 Message: lib/hash: Defining dependency "hash"
00:00:51.643 Message: lib/timer: Defining dependency "timer"
00:00:51.643 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:00:51.643 Fetching value of define "__AVX512VL__" : (undefined) (cached)
00:00:51.643 Fetching value of define "__AVX512CD__" : (undefined) (cached)
00:00:51.643 Fetching value of define "__AVX512BW__" : (undefined) (cached)
00:00:51.643 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES
00:00:51.643 Message: lib/acl: Defining dependency "acl"
00:00:51.643 Message: lib/bbdev: Defining dependency "bbdev"
00:00:51.643 Message: lib/bitratestats: Defining dependency "bitratestats"
00:00:51.643 Run-time dependency libelf found: YES 0.190
00:00:51.643 Message: lib/bpf: Defining dependency "bpf"
00:00:51.643 Message: lib/cfgfile: Defining dependency "cfgfile"
00:00:51.643 Message: lib/compressdev: Defining dependency "compressdev"
00:00:51.643 Message: lib/cryptodev: Defining dependency "cryptodev"
00:00:51.643 Message: lib/distributor: Defining dependency "distributor"
00:00:51.643 Message: lib/dmadev: Defining dependency "dmadev"
00:00:51.643 Message: lib/efd: Defining dependency "efd"
00:00:51.643 Message: lib/eventdev: Defining dependency "eventdev"
00:00:51.643 Message: lib/dispatcher: Defining dependency "dispatcher"
00:00:51.643 Message: lib/gpudev: Defining dependency "gpudev"
00:00:51.643 Message: lib/gro: Defining dependency "gro"
00:00:51.643 Message: lib/gso: Defining dependency "gso"
00:00:51.643 Message: lib/ip_frag: Defining dependency "ip_frag"
00:00:51.643 Message: lib/jobstats: Defining dependency "jobstats"
00:00:51.643 Message: lib/latencystats: Defining dependency "latencystats"
00:00:51.643 Message: lib/lpm: Defining dependency "lpm"
00:00:51.643 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:00:51.643 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:00:51.643 Fetching value of define "__AVX512IFMA__" : (undefined)
00:00:51.643 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:00:51.643 Message: lib/member: Defining dependency "member"
00:00:51.643 Message: lib/pcapng: Defining dependency "pcapng"
00:00:51.643 Compiler for C supports arguments -Wno-cast-qual: YES
00:00:51.643 Message: lib/power: Defining dependency "power"
00:00:51.643 Message: lib/rawdev: Defining dependency "rawdev"
00:00:51.643 Message: lib/regexdev: Defining dependency "regexdev"
00:00:51.643 Message: lib/mldev: Defining dependency "mldev"
00:00:51.643 Message: lib/rib: Defining dependency "rib"
00:00:51.643 Message: lib/reorder: Defining dependency "reorder"
00:00:51.643 Message: lib/sched: Defining dependency "sched"
00:00:51.643 Message: lib/security: Defining dependency "security"
00:00:51.643 Message: lib/stack: Defining dependency "stack"
00:00:51.643 Has header "linux/userfaultfd.h" : YES
00:00:51.643 Has header "linux/vduse.h" : YES
00:00:51.643 Message: lib/vhost: Defining dependency "vhost"
00:00:51.643 Message: lib/ipsec: Defining dependency "ipsec"
00:00:51.643 Message: lib/pdcp: Defining dependency "pdcp"
00:00:51.643 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:00:51.643 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:00:51.643 Compiler for C supports arguments -mavx512f -mavx512dq: YES
00:00:51.643 Compiler for C supports arguments -mavx512bw: YES (cached)
00:00:51.643 Message: lib/fib: Defining dependency "fib"
00:00:51.643 Message: lib/port: Defining dependency "port"
00:00:51.643 Message: lib/pdump: Defining dependency "pdump"
00:00:51.643 Message: lib/table: Defining dependency "table"
00:00:51.643 Message: lib/pipeline: Defining dependency "pipeline"
00:00:51.643 Message: lib/graph: Defining dependency "graph"
00:00:51.643 Message: lib/node: Defining dependency "node"
00:00:53.018 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:00:53.018 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:00:53.018 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:00:53.018 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:00:53.018 Compiler for C supports arguments -Wno-sign-compare: YES
00:00:53.018 Compiler for C supports arguments -Wno-unused-value: YES
00:00:53.018 Compiler for C supports arguments -Wno-format: YES
00:00:53.018 Compiler for C supports arguments -Wno-format-security: YES
00:00:53.018 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:00:53.018 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:00:53.018 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:00:53.018 Compiler for C supports arguments -Wno-unused-parameter: YES
00:00:53.018 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:00:53.018 Compiler for C supports arguments -mavx512f: YES (cached)
00:00:53.018 Compiler for C supports arguments -mavx512bw: YES (cached)
00:00:53.018 Compiler for C supports arguments -march=skylake-avx512: YES
00:00:53.018 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:00:53.018 Has header "sys/epoll.h" : YES
00:00:53.018 Program doxygen found: YES (/usr/bin/doxygen)
00:00:53.018 Configuring doxy-api-html.conf using configuration
00:00:53.018 Configuring doxy-api-man.conf using configuration
00:00:53.018 Program mandb found: YES (/usr/bin/mandb)
00:00:53.018 Program sphinx-build found: NO
00:00:53.018 Configuring rte_build_config.h using configuration
00:00:53.018 Message:
00:00:53.018 =================
00:00:53.018 Applications Enabled
00:00:53.018 =================
00:00:53.018
00:00:53.018 apps:
00:00:53.018 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:00:53.018 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:00:53.018 test-pmd, test-regex, test-sad, test-security-perf,
00:00:53.018
00:00:53.018 Message:
00:00:53.018 =================
00:00:53.018 Libraries Enabled
00:00:53.018 =================
00:00:53.018
00:00:53.018 libs:
00:00:53.018 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu,
00:00:53.018 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics,
00:00:53.018 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev,
00:00:53.018 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro,
00:00:53.018 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power,
00:00:53.018 rawdev, regexdev, mldev, rib, reorder, sched, security, stack,
00:00:53.018 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline,
00:00:53.018 graph, node,
00:00:53.018
00:00:53.018 Message:
00:00:53.018 ===============
00:00:53.018 Drivers Enabled
00:00:53.018 ===============
00:00:53.018
00:00:53.018 common:
00:00:53.018
00:00:53.018 bus:
00:00:53.018 pci, vdev,
00:00:53.018 mempool:
00:00:53.018 ring,
00:00:53.018 dma:
00:00:53.018
00:00:53.018 net:
00:00:53.018 i40e,
00:00:53.018 raw:
00:00:53.018
00:00:53.018 crypto:
00:00:53.018
00:00:53.018 compress:
00:00:53.018
00:00:53.018 regex:
00:00:53.018
00:00:53.018 ml:
00:00:53.018
00:00:53.018 vdpa:
00:00:53.018
00:00:53.018 event:
00:00:53.018
00:00:53.018 baseband:
00:00:53.018
00:00:53.018 gpu:
00:00:53.018
00:00:53.018
00:00:53.018 Message:
00:00:53.018 =================
00:00:53.018 Content Skipped
00:00:53.018 =================
00:00:53.018
00:00:53.018 apps:
00:00:53.018
00:00:53.018 libs:
00:00:53.018
00:00:53.018 drivers:
00:00:53.018 common/cpt: not in enabled drivers build config
00:00:53.018 common/dpaax: not in enabled drivers build config
00:00:53.018 common/iavf: not in enabled drivers build config
00:00:53.018 common/idpf: not in enabled drivers build config
00:00:53.018 common/ionic: not in enabled drivers build config
00:00:53.018 common/mvep: not in enabled drivers build config
00:00:53.018 common/octeontx: not in enabled drivers build config
00:00:53.018 bus/auxiliary: not in enabled drivers build config
00:00:53.018 bus/cdx: not in enabled drivers build config
00:00:53.018 bus/dpaa: not in enabled drivers build config
00:00:53.018 bus/fslmc: not in enabled drivers build config
00:00:53.018 bus/ifpga: not in enabled drivers build config
00:00:53.018 bus/platform: not in enabled drivers build config
00:00:53.018 bus/uacce: not in enabled drivers build config
00:00:53.018 bus/vmbus: not in enabled drivers build config
00:00:53.018 common/cnxk: not in enabled drivers build config
00:00:53.018 common/mlx5: not in enabled drivers build config
00:00:53.018 common/nfp: not in enabled drivers build config
00:00:53.018 common/nitrox: not in enabled drivers build config
00:00:53.018 common/qat: not in enabled drivers build config
00:00:53.018 common/sfc_efx: not in enabled drivers build config
00:00:53.018 mempool/bucket: not in enabled drivers build config
00:00:53.018 mempool/cnxk: not in enabled drivers build config
00:00:53.018 mempool/dpaa: not in enabled drivers build config
00:00:53.018 mempool/dpaa2: not in enabled drivers build config
00:00:53.018 mempool/octeontx: not in enabled drivers build config
00:00:53.018 mempool/stack: not in enabled drivers build config
00:00:53.018 dma/cnxk: not in enabled drivers build config
00:00:53.018 dma/dpaa: not in enabled drivers build config
00:00:53.018 dma/dpaa2: not in enabled drivers build config
00:00:53.018 dma/hisilicon: not in enabled drivers build config
00:00:53.018 dma/idxd: not in enabled drivers build config
00:00:53.018 dma/ioat: not in enabled drivers build config
00:00:53.018 dma/odm: not in enabled drivers build config
00:00:53.018 dma/skeleton: not in enabled drivers build config
00:00:53.018 net/af_packet: not in enabled drivers build config
00:00:53.018 net/af_xdp: not in enabled drivers build config
00:00:53.018 net/ark: not in enabled drivers build config
00:00:53.018 net/atlantic: not in enabled drivers build config
00:00:53.018 net/avp: not in enabled drivers build config
00:00:53.018 net/axgbe: not in enabled drivers build config
00:00:53.018 net/bnx2x: not in enabled drivers build config
00:00:53.018 net/bnxt: not in enabled drivers build config
00:00:53.018 net/bonding: not in enabled drivers build config
00:00:53.018 net/cnxk: not in enabled drivers build config
00:00:53.018 net/cpfl: not in enabled drivers build config
00:00:53.018 net/cxgbe: not in enabled drivers build config
00:00:53.018 net/dpaa: not in enabled drivers build config
00:00:53.018 net/dpaa2: not in enabled drivers build config
00:00:53.018 net/e1000: not in enabled drivers build config
00:00:53.018 net/ena: not in enabled drivers build config
00:00:53.018 net/enetc: not in enabled drivers build config
00:00:53.019 net/enetfec: not in enabled drivers build config
00:00:53.019 net/enic: not in enabled drivers build config
00:00:53.019 net/failsafe: not in enabled drivers build config
00:00:53.019 net/fm10k: not in enabled drivers build config
00:00:53.019 net/gve: not in enabled drivers build config
00:00:53.019 net/hinic: not in enabled drivers build config
00:00:53.019 net/hns3: not in enabled drivers build config
00:00:53.019 net/iavf: not in enabled drivers build config
00:00:53.019 net/ice: not in enabled drivers build config
00:00:53.019 net/idpf: not in enabled drivers build config
00:00:53.019 net/igc: not in enabled drivers build config
00:00:53.019 net/ionic: not in enabled drivers build config
00:00:53.019 net/ipn3ke: not in enabled drivers build config
00:00:53.019 net/ixgbe: not in enabled drivers build config
00:00:53.019 net/mana: not in enabled drivers build config
00:00:53.019 net/memif: not in enabled drivers build config
00:00:53.019 net/mlx4: not in enabled drivers build config
00:00:53.019 net/mlx5: not in enabled drivers build config
00:00:53.019 net/mvneta: not in enabled drivers build config
00:00:53.019 net/mvpp2: not in enabled drivers build config
00:00:53.019 net/netvsc: not in enabled drivers build config
00:00:53.019 net/nfb: not in enabled drivers build config
00:00:53.019 net/nfp: not in enabled drivers build config
00:00:53.019 net/ngbe: not in enabled drivers build config
00:00:53.019 net/null: not in enabled drivers build config
00:00:53.019 net/octeontx: not in enabled drivers build config
00:00:53.019 net/octeon_ep: not in enabled drivers build config
00:00:53.019 net/pcap: not in enabled drivers build config
00:00:53.019 net/pfe: not in enabled drivers build config
00:00:53.019 net/qede: not in enabled drivers build config
00:00:53.019 net/ring: not in enabled drivers build config
00:00:53.019 net/sfc: not in enabled drivers build config
00:00:53.019 net/softnic: not in enabled drivers build config
00:00:53.019 net/tap: not in enabled drivers build config
00:00:53.019 net/thunderx: not in enabled drivers build config
00:00:53.019 net/txgbe: not in enabled drivers build config
00:00:53.019 net/vdev_netvsc: not in enabled drivers build config
00:00:53.019 net/vhost: not in enabled drivers build config
00:00:53.019 net/virtio: not in enabled drivers build config
00:00:53.019 net/vmxnet3: not in enabled drivers build config
00:00:53.019 raw/cnxk_bphy: not in enabled drivers build config
00:00:53.019 raw/cnxk_gpio: not in enabled drivers build config
00:00:53.019 raw/dpaa2_cmdif: not in enabled drivers build config
00:00:53.019 raw/ifpga: not in enabled drivers build config
00:00:53.019 raw/ntb: not in enabled drivers build config
00:00:53.019 raw/skeleton: not in enabled drivers build config
00:00:53.019 crypto/armv8: not in enabled drivers build config
00:00:53.019 crypto/bcmfs: not in enabled drivers build config
00:00:53.019 crypto/caam_jr: not in enabled drivers build config
00:00:53.019 crypto/ccp: not in enabled drivers build config
00:00:53.019 crypto/cnxk: not in enabled drivers build config
00:00:53.019 crypto/dpaa_sec: not in enabled drivers build config
00:00:53.019 crypto/dpaa2_sec: not in enabled drivers build config
00:00:53.019 crypto/ionic: not in enabled drivers build config
00:00:53.019 crypto/ipsec_mb: not in enabled drivers build config
00:00:53.019 crypto/mlx5: not in enabled drivers build config
00:00:53.019 crypto/mvsam: not in enabled drivers build config
00:00:53.019 crypto/nitrox: not in enabled drivers build config
00:00:53.019 crypto/null: not in enabled drivers build config
00:00:53.019 crypto/octeontx: not in enabled drivers build config
00:00:53.019 crypto/openssl: not in enabled drivers build config
00:00:53.019 crypto/scheduler: not in enabled drivers build config
00:00:53.019 crypto/uadk: not in enabled drivers build config
00:00:53.019 crypto/virtio: not in enabled drivers build config
00:00:53.019 compress/isal: not in enabled drivers build config
00:00:53.019 compress/mlx5: not in enabled drivers build config
00:00:53.019 compress/nitrox: not in enabled drivers build config
00:00:53.019 compress/octeontx: not in enabled drivers build config
00:00:53.019 compress/uadk: not in enabled drivers build config
00:00:53.019 compress/zlib: not in enabled drivers build config
00:00:53.019 regex/mlx5: not in enabled drivers build config
00:00:53.019 regex/cn9k: not in enabled drivers build config
00:00:53.019 ml/cnxk: not in enabled drivers build config
00:00:53.019 vdpa/ifc: not in enabled drivers build config
00:00:53.019 vdpa/mlx5: not in enabled drivers build config
00:00:53.019 vdpa/nfp: not in enabled drivers build config
00:00:53.019 vdpa/sfc: not in enabled drivers build config
00:00:53.019 event/cnxk: not in enabled drivers build config
00:00:53.019 event/dlb2: not in enabled drivers build config
00:00:53.019 event/dpaa: not in enabled drivers build config
00:00:53.019 event/dpaa2: not in enabled drivers build config
00:00:53.019 event/dsw: not in enabled drivers build config
00:00:53.019 event/opdl: not in enabled drivers build config
00:00:53.019 event/skeleton: not in enabled drivers build config
00:00:53.019 event/sw: not in enabled drivers build config
00:00:53.019 event/octeontx: not in enabled drivers build config
00:00:53.019 baseband/acc: not in enabled drivers build config
00:00:53.019 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:00:53.019 baseband/fpga_lte_fec: not in enabled drivers build config
00:00:53.019 baseband/la12xx: not in enabled drivers build config
00:00:53.019 baseband/null: not in enabled drivers build config
00:00:53.019 baseband/turbo_sw: not in enabled drivers build config
00:00:53.019 gpu/cuda: not in enabled drivers build config
00:00:53.019
00:00:53.019
00:00:53.019 Build targets in project: 224
00:00:53.019
00:00:53.019 DPDK 24.07.0-rc2
00:00:53.019
00:00:53.019 User defined options
00:00:53.019 libdir : lib
00:00:53.019 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:53.019 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:00:53.019 c_link_args :
00:00:53.019 enable_docs : false
00:00:53.019 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:00:53.019 enable_kmods : false
00:00:53.019 machine : native
00:00:53.019 tests : false
00:00:53.019
00:00:53.019 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:00:53.019 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:00:53.019 09:34:09 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48
00:00:53.279 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
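
From here the DPDK compile runs with ninja in build-tmp at 48 jobs. The earlier config_params line shows where this build gets consumed: SPDK is configured with --with-dpdk pointing at the install prefix. A condensed sketch of the flow under the job's workspace layout; the harness's exact install invocation may differ:

  DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
  ninja -C "$DPDK_DIR/build-tmp" -j48
  meson install -C "$DPDK_DIR/build-tmp"   # prefix is $DPDK_DIR/build
  # SPDK is then built against it:
  #   ./configure --with-dpdk="$DPDK_DIR/build" --enable-ubsan ...
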
00:00:54.323 [27/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:00:54.323 [28/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:00:54.323 [29/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:00:54.323 [30/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:00:54.323 [31/723] Linking target lib/librte_log.so.24.2 00:00:54.323 [32/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:00:54.323 [33/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:00:54.323 [34/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:00:54.323 [35/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:00:54.323 [36/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:00:54.584 [37/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:00:54.584 [38/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:00:54.584 [39/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:00:54.584 [40/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:00:54.584 [41/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:00:54.584 [42/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:00:54.584 [43/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:00:54.584 [44/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:00:54.584 [45/723] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:00:54.584 [46/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:00:54.585 [47/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:00:54.585 [48/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:00:54.585 [49/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:00:54.585 [50/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:00:54.585 [51/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:00:54.585 [52/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:00:54.585 [53/723] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:00:54.585 [54/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:00:54.585 [55/723] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:00:54.585 [56/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:00:54.585 [57/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:00:54.585 [58/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:00:54.585 [59/723] Linking target lib/librte_kvargs.so.24.2 00:00:54.585 [60/723] Linking target lib/librte_argparse.so.24.2 00:00:54.585 [61/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:00:54.846 [62/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:00:54.846 [63/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:00:54.847 [64/723] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:00:54.847 [65/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:00:54.847 [66/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:00:55.111 [67/723] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:00:55.111 [68/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:00:55.111 [69/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:00:55.111 [70/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:00:55.111 [71/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:00:55.371 [72/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:00:55.371 [73/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:00:55.371 [74/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:00:55.371 [75/723] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:00:55.371 [76/723] Linking static target lib/librte_pci.a 00:00:55.371 [77/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:00:55.371 [78/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:00:55.371 [79/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:00:55.371 [80/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:00:55.632 [81/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:00:55.632 [82/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:00:55.632 [83/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:00:55.632 [84/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:00:55.632 [85/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:00:55.632 [86/723] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:00:55.632 [87/723] Linking static target lib/net/libnet_crc_avx512_lib.a 00:00:55.632 [88/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:00:55.632 [89/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:00:55.632 [90/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:00:55.632 [91/723] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:00:55.632 [92/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:00:55.632 [93/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:00:55.894 [94/723] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:00:55.894 [95/723] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:00:55.894 [96/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:00:55.894 [97/723] Linking static target lib/librte_ring.a 00:00:55.894 [98/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:00:55.894 [99/723] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:00:55.894 [100/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:00:55.894 [101/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:00:55.894 [102/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:00:55.894 [103/723] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:00:55.894 [104/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:00:55.894 [105/723] Linking static target lib/librte_meter.a 00:00:55.894 [106/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:00:55.894 [107/723] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.894 
[108/723] Linking static target lib/librte_telemetry.a 00:00:55.894 [109/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:00:55.894 [110/723] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:00:55.894 [111/723] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:00:55.894 [112/723] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:00:55.894 [113/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:00:55.894 [114/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:00:55.894 [115/723] Linking static target lib/librte_net.a 00:00:56.157 [116/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:00:56.157 [117/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:00:56.157 [118/723] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.157 [119/723] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.157 [120/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:00:56.157 [121/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:00:56.157 [122/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:00:56.423 [123/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:00:56.423 [124/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:00:56.423 [125/723] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.423 [126/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:00:56.423 [127/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:00:56.423 [128/723] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.685 [129/723] Linking target lib/librte_telemetry.so.24.2 00:00:56.685 [130/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:00:56.685 [131/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:00:56.685 [132/723] Linking static target lib/librte_mempool.a 00:00:56.685 [133/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:00:56.685 [134/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:00:56.685 [135/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:00:56.685 [136/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:00:56.685 [137/723] Linking static target lib/librte_cmdline.a 00:00:56.685 [138/723] Linking static target lib/librte_eal.a 00:00:56.685 [139/723] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:00:56.944 [140/723] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:00:56.944 [141/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:00:56.944 [142/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:00:56.944 [143/723] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:00:56.944 [144/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:00:56.944 [145/723] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:00:56.944 [146/723] Linking static target lib/librte_cfgfile.a 00:00:56.944 [147/723] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:00:56.944 [148/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:00:56.944 [149/723] 
Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:00:56.944 [150/723] Linking static target lib/librte_metrics.a 00:00:57.207 [151/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:00:57.207 [152/723] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:00:57.207 [153/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:00:57.207 [154/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:00:57.207 [155/723] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:00:57.207 [156/723] Linking static target lib/librte_rcu.a 00:00:57.469 [157/723] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:00:57.469 [158/723] Linking static target lib/librte_bitratestats.a 00:00:57.469 [159/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:00:57.469 [160/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:00:57.469 [161/723] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.469 [162/723] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:00:57.469 [163/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:00:57.469 [164/723] Linking static target lib/librte_mbuf.a 00:00:57.469 [165/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:00:57.469 [166/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:00:57.730 [167/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:00:57.730 [168/723] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.730 [169/723] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:00:57.730 [170/723] Linking static target lib/librte_timer.a 00:00:57.730 [171/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:00:57.730 [172/723] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.730 [173/723] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.730 [174/723] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.730 [175/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:00:57.730 [176/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:00:57.992 [177/723] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:00:57.992 [178/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:00:57.992 [179/723] Linking static target lib/librte_bbdev.a 00:00:57.992 [180/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:00:57.992 [181/723] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.992 [182/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:00:57.992 [183/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:00:57.992 [184/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:00:57.992 [185/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:00:57.992 [186/723] Linking static target lib/librte_compressdev.a 00:00:58.258 [187/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:00:58.258 [188/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:00:58.258 [189/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 
00:00:58.258 [190/723] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.258 [191/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:00:58.258 [192/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:00:58.519 [193/723] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.779 [194/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:00:58.779 [195/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:00:58.779 [196/723] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.779 [197/723] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.779 [198/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:00:58.779 [199/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:00:58.779 [200/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:00:58.779 [201/723] Linking static target lib/librte_distributor.a 00:00:58.779 [202/723] Linking static target lib/librte_dmadev.a 00:00:59.039 [203/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:00:59.039 [204/723] Linking static target lib/librte_bpf.a 00:00:59.039 [205/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:00:59.039 [206/723] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:00:59.039 [207/723] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:00:59.039 [208/723] Linking static target lib/librte_dispatcher.a 00:00:59.039 [209/723] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:00:59.039 [210/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:00:59.298 [211/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:00:59.298 [212/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:00:59.298 [213/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:00:59.298 [214/723] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:00:59.298 [215/723] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.298 [216/723] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:00:59.298 [217/723] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:00:59.298 [218/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:00:59.298 [219/723] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:00:59.298 [220/723] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:00:59.298 [221/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:00:59.298 [222/723] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:00:59.298 [223/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:00:59.298 [224/723] Linking static target lib/librte_gpudev.a 00:00:59.298 [225/723] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:00:59.298 [226/723] Linking static target lib/librte_gro.a 00:00:59.562 [227/723] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:00:59.562 [228/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:00:59.562 [229/723] Linking static target lib/librte_jobstats.a 00:00:59.562 [230/723] Generating lib/bpf.sym_chk with a custom command 
(wrapped by meson to capture output) 00:00:59.562 [231/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:00:59.562 [232/723] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:00:59.563 [233/723] Linking static target lib/librte_gso.a 00:00:59.563 [234/723] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.563 [235/723] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:00:59.824 [236/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:00:59.824 [237/723] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.824 [238/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:00:59.824 [239/723] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:00:59.824 [240/723] Linking static target lib/librte_latencystats.a 00:00:59.824 [241/723] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.824 [242/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:00:59.824 [243/723] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.824 [244/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:00:59.824 [245/723] Linking static target lib/librte_ip_frag.a 00:01:00.087 [246/723] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.087 [247/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:00.087 [248/723] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:00.088 [249/723] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:00.088 [250/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:00.088 [251/723] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:00.088 [252/723] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:00.088 [253/723] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:00.088 [254/723] Linking static target lib/librte_efd.a 00:01:00.088 [255/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:00.088 [256/723] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.347 [257/723] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.347 [258/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:00.347 [259/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:00.347 [260/723] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:00.347 [261/723] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:00.634 [262/723] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.634 [263/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:00.634 [264/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:00.634 [265/723] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:00.634 [266/723] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:00.634 [267/723] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.634 [268/723] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:00.893 [269/723] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:00.893 [270/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:00.893 [271/723] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:00.893 [272/723] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:00.893 [273/723] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:00.893 [274/723] Linking static target lib/librte_regexdev.a 00:01:00.893 [275/723] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:00.893 [276/723] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:00.893 [277/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:00.893 [278/723] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:00.893 [279/723] Linking static target lib/librte_pcapng.a 00:01:01.153 [280/723] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:01.153 [281/723] Linking static target lib/librte_rawdev.a 00:01:01.153 [282/723] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:01.153 [283/723] Linking static target lib/librte_power.a 00:01:01.153 [284/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:01.153 [285/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:01.153 [286/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:01.153 [287/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:01.153 [288/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:01.153 [289/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:01.153 [290/723] Linking static target lib/librte_mldev.a 00:01:01.153 [291/723] Linking static target lib/librte_lpm.a 00:01:01.153 [292/723] Linking static target lib/librte_stack.a 00:01:01.420 [293/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:01.420 [294/723] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.420 [295/723] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:01.420 [296/723] Linking static target lib/acl/libavx2_tmp.a 00:01:01.420 [297/723] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:01.420 [298/723] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:01:01.420 [299/723] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:01.420 [300/723] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:01.420 [301/723] Linking static target lib/librte_reorder.a 00:01:01.683 [302/723] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.683 [303/723] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:01.683 [304/723] Linking static target lib/librte_security.a 00:01:01.683 [305/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:01.683 [306/723] Linking static target lib/librte_cryptodev.a 00:01:01.683 [307/723] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:01.683 [308/723] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:01.683 [309/723] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:01.683 [310/723] Linking static target lib/librte_hash.a 00:01:01.683 [311/723] Generating lib/rawdev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:01.683 [312/723] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.948 [313/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:01.948 [314/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:01.948 [315/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:01.948 [316/723] Linking static target lib/librte_rib.a 00:01:01.948 [317/723] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.948 [318/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:01.948 [319/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:02.213 [320/723] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.213 [321/723] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:02.213 [322/723] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:02.213 [323/723] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.213 [324/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:02.213 [325/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:02.213 [326/723] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:02.213 [327/723] Linking static target lib/acl/libavx512_tmp.a 00:01:02.213 [328/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:02.213 [329/723] Linking static target lib/librte_acl.a 00:01:02.213 [330/723] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:02.213 [331/723] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:02.213 [332/723] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:02.213 [333/723] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.481 [334/723] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:02.481 [335/723] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:02.481 [336/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:02.481 [337/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:02.481 [338/723] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:02.743 [339/723] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:02.743 [340/723] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.743 [341/723] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.743 [342/723] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.743 [343/723] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:01:03.003 [344/723] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:03.289 [345/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:03.289 [346/723] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:03.289 [347/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:03.289 [348/723] Linking static target lib/librte_eventdev.a 00:01:03.289 [349/723] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:03.289 [350/723] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:03.569 [351/723] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:03.569 [352/723] 
Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:03.569 [353/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:03.569 [354/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:03.569 [355/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:03.569 [356/723] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:03.569 [357/723] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.569 [358/723] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:03.569 [359/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:03.569 [360/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:03.569 [361/723] Linking static target lib/librte_member.a 00:01:03.569 [362/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:03.569 [363/723] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.569 [364/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:03.836 [365/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:03.836 [366/723] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:03.836 [367/723] Linking static target lib/librte_fib.a 00:01:03.836 [368/723] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:03.836 [369/723] Linking static target lib/librte_sched.a 00:01:03.836 [370/723] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:03.836 [371/723] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:03.836 [372/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:03.836 [373/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:03.836 [374/723] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:03.836 [375/723] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:04.095 [376/723] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:04.095 [377/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:04.095 [378/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:04.095 [379/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:04.095 [380/723] Linking static target lib/librte_ethdev.a 00:01:04.095 [381/723] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:04.095 [382/723] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.095 [383/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:04.357 [384/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:04.357 [385/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:04.357 [386/723] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.357 [387/723] Linking static target lib/librte_ipsec.a 00:01:04.357 [388/723] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.357 [389/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:04.357 [390/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:04.620 [391/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:04.620 [392/723] Compiling C object 
lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:04.620 [393/723] Linking static target lib/librte_pdump.a 00:01:04.620 [394/723] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:04.888 [395/723] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:04.888 [396/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:04.888 [397/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:04.888 [398/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:04.888 [399/723] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.888 [400/723] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:04.888 [401/723] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:04.888 [402/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:04.888 [403/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:04.888 [404/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:04.888 [405/723] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:05.148 [406/723] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:05.148 [407/723] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:05.148 [408/723] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:05.148 [409/723] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.148 [410/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:05.148 [411/723] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:05.148 [412/723] Linking static target lib/librte_pdcp.a 00:01:05.148 [413/723] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:05.408 [414/723] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:05.408 [415/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:05.408 [416/723] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:05.408 [417/723] Linking static target lib/librte_table.a 00:01:05.408 [418/723] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:05.408 [419/723] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:05.408 [420/723] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:05.671 [421/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:05.671 [422/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:05.671 [423/723] Linking static target lib/librte_graph.a 00:01:05.671 [424/723] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.933 [425/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:05.933 [426/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:05.933 [427/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:05.933 [428/723] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:05.933 [429/723] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:05.933 [430/723] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:01:05.933 [431/723] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:06.197 [432/723] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:06.197 [433/723] 
Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:06.197 [434/723] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:06.197 [435/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:06.197 [436/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:06.197 [437/723] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:06.197 [438/723] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:06.197 [439/723] Linking static target lib/librte_port.a 00:01:06.459 [440/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:06.459 [441/723] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:06.459 [442/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:06.459 [443/723] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.459 [444/723] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:06.719 [445/723] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:06.719 [446/723] Linking static target drivers/librte_bus_vdev.a 00:01:06.719 [447/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:06.719 [448/723] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:06.719 [449/723] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.719 [450/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:06.719 [451/723] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.719 [452/723] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:06.719 [453/723] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:06.719 [454/723] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:06.982 [455/723] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:06.982 [456/723] Linking static target drivers/librte_bus_pci.a 00:01:06.982 [457/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:06.982 [458/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:06.982 [459/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:06.982 [460/723] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:06.982 [461/723] Linking static target lib/librte_node.a 00:01:06.982 [462/723] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.982 [463/723] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.982 [464/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:06.982 [465/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:07.246 [466/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:07.246 [467/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:07.246 [468/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:07.246 [469/723] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:07.246 [470/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:07.246 [471/723] Compiling C 
object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:07.246 [472/723] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:07.246 [473/723] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:07.246 [474/723] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:07.246 [475/723] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:07.507 [476/723] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:07.507 [477/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:07.507 [478/723] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.776 [479/723] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:07.776 [480/723] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.776 [481/723] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:07.776 [482/723] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:07.777 [483/723] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:07.777 [484/723] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:07.777 [485/723] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:07.777 [486/723] Linking static target drivers/librte_mempool_ring.a 00:01:07.777 [487/723] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:07.777 [488/723] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:07.777 [489/723] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.036 [490/723] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:01:08.036 [491/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:08.036 [492/723] Linking target lib/librte_eal.so.24.2 00:01:08.036 [493/723] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:08.036 [494/723] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:08.036 [495/723] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:08.036 [496/723] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:08.298 [497/723] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:01:08.298 [498/723] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:08.298 [499/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:08.298 [500/723] Linking target lib/librte_ring.so.24.2 00:01:08.298 [501/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:08.298 [502/723] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:08.298 [503/723] Linking target lib/librte_meter.so.24.2 00:01:08.298 [504/723] Linking target lib/librte_pci.so.24.2 00:01:08.298 [505/723] Linking target lib/librte_timer.so.24.2 00:01:08.298 [506/723] Linking target lib/librte_acl.so.24.2 00:01:08.558 [507/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:08.558 [508/723] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:01:08.558 [509/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:08.558 [510/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:08.558 [511/723] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:01:08.558 [512/723] Generating symbol file 
lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:01:08.558 [513/723] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:08.558 [514/723] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:01:08.558 [515/723] Linking target lib/librte_cfgfile.so.24.2 00:01:08.559 [516/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:08.559 [517/723] Linking target lib/librte_rcu.so.24.2 00:01:08.559 [518/723] Linking target lib/librte_mempool.so.24.2 00:01:08.559 [519/723] Linking target lib/librte_dmadev.so.24.2 00:01:08.559 [520/723] Linking target lib/librte_jobstats.so.24.2 00:01:08.559 [521/723] Linking target lib/librte_stack.so.24.2 00:01:08.559 [522/723] Linking target lib/librte_rawdev.so.24.2 00:01:08.559 [523/723] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:01:08.559 [524/723] Linking target drivers/librte_bus_pci.so.24.2 00:01:08.559 [525/723] Linking target drivers/librte_bus_vdev.so.24.2 00:01:08.824 [526/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:08.824 [527/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:08.824 [528/723] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:08.824 [529/723] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:08.824 [530/723] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:01:08.824 [531/723] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:01:08.824 [532/723] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:01:08.824 [533/723] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:01:08.824 [534/723] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:01:09.085 [535/723] Linking target lib/librte_mbuf.so.24.2 00:01:09.085 [536/723] Linking target lib/librte_rib.so.24.2 00:01:09.085 [537/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:09.085 [538/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:09.085 [539/723] Linking target drivers/librte_mempool_ring.so.24.2 00:01:09.085 [540/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:09.085 [541/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:09.085 [542/723] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:01:09.085 [543/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:09.085 [544/723] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:01:09.346 [545/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:09.346 [546/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:09.346 [547/723] Linking target lib/librte_bbdev.so.24.2 00:01:09.346 [548/723] Linking target lib/librte_net.so.24.2 00:01:09.346 [549/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:09.346 [550/723] Linking target lib/librte_compressdev.so.24.2 00:01:09.346 [551/723] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:09.346 [552/723] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:09.346 [553/723] Linking target lib/librte_distributor.so.24.2 00:01:09.346 [554/723] Linking target lib/librte_cryptodev.so.24.2 00:01:09.346 [555/723] Linking target lib/librte_gpudev.so.24.2 00:01:09.346 [556/723] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:09.346 [557/723] Linking target lib/librte_regexdev.so.24.2 00:01:09.346 [558/723] Linking target lib/librte_reorder.so.24.2 00:01:09.346 [559/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:09.347 [560/723] Linking target lib/librte_mldev.so.24.2 00:01:09.347 [561/723] Linking target lib/librte_sched.so.24.2 00:01:09.347 [562/723] Linking target lib/librte_fib.so.24.2 00:01:09.347 [563/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:09.347 [564/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:09.347 [565/723] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:09.611 [566/723] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:01:09.611 [567/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:09.611 [568/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:09.611 [569/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:09.611 [570/723] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:01:09.611 [571/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:09.611 [572/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:09.611 [573/723] Linking target lib/librte_cmdline.so.24.2 00:01:09.611 [574/723] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:01:09.611 [575/723] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:01:09.611 [576/723] Linking target lib/librte_hash.so.24.2 00:01:09.611 [577/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:09.611 [578/723] Linking target lib/librte_security.so.24.2 00:01:09.611 [579/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:09.611 [580/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:09.611 [581/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:09.872 [582/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:09.872 [583/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:09.872 [584/723] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:01:09.872 [585/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:09.872 [586/723] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:01:09.872 [587/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:09.872 [588/723] Linking target lib/librte_efd.so.24.2 00:01:09.872 [589/723] Linking target lib/librte_lpm.so.24.2 00:01:09.872 [590/723] Linking target lib/librte_member.so.24.2 00:01:09.872 [591/723] Linking target lib/librte_ipsec.so.24.2 00:01:09.872 [592/723] Linking target lib/librte_pdcp.so.24.2 
00:01:10.132 [593/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:10.132 [594/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:10.132 [595/723] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:01:10.132 [596/723] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:01:10.132 [597/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:10.395 [598/723] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:10.395 [599/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:10.395 [600/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:10.395 [601/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:10.662 [602/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:10.662 [603/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:10.662 [604/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:10.662 [605/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:10.662 [606/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:10.662 [607/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:10.662 [608/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:10.921 [609/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:10.921 [610/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:10.921 [611/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:10.921 [612/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:10.921 [613/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:10.921 [614/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:10.921 [615/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:10.921 [616/723] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:11.182 [617/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:11.182 [618/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:11.182 [619/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:11.182 [620/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:11.182 [621/723] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:11.182 [622/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:11.441 [623/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:11.700 [624/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:11.700 [625/723] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:11.700 [626/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:11.700 [627/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:11.700 [628/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:11.700 [629/723] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:11.700 [630/723] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:11.700 [631/723] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:11.958 [632/723] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:11.958 [633/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:11.959 [634/723] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:11.959 [635/723] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.959 [636/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:11.959 [637/723] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:11.959 [638/723] Linking target lib/librte_ethdev.so.24.2 00:01:11.959 [639/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:11.959 [640/723] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:11.959 [641/723] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:12.217 [642/723] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:12.217 [643/723] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:01:12.217 [644/723] Linking target lib/librte_gso.so.24.2 00:01:12.217 [645/723] Linking target lib/librte_bpf.so.24.2 00:01:12.217 [646/723] Linking target lib/librte_pcapng.so.24.2 00:01:12.217 [647/723] Linking target lib/librte_metrics.so.24.2 00:01:12.217 [648/723] Linking target lib/librte_ip_frag.so.24.2 00:01:12.217 [649/723] Linking target lib/librte_gro.so.24.2 00:01:12.217 [650/723] Linking target lib/librte_power.so.24.2 00:01:12.217 [651/723] Linking target lib/librte_eventdev.so.24.2 00:01:12.217 [652/723] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:12.476 [653/723] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:12.476 [654/723] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:01:12.476 [655/723] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:01:12.476 [656/723] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:12.476 [657/723] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:01:12.476 [658/723] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:01:12.476 [659/723] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:01:12.476 [660/723] Linking target lib/librte_latencystats.so.24.2 00:01:12.476 [661/723] Linking target lib/librte_graph.so.24.2 00:01:12.476 [662/723] Linking target lib/librte_bitratestats.so.24.2 00:01:12.476 [663/723] Linking target lib/librte_pdump.so.24.2 00:01:12.476 [664/723] Linking target lib/librte_dispatcher.so.24.2 00:01:12.476 [665/723] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:01:12.476 [666/723] Linking target lib/librte_port.so.24.2 00:01:12.476 [667/723] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:12.735 [668/723] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:12.735 [669/723] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:01:12.735 [670/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:12.735 [671/723] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:01:12.735 [672/723] Linking target lib/librte_node.so.24.2 00:01:12.735 [673/723] 
Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:12.735 [674/723] Linking target lib/librte_table.so.24.2 00:01:12.992 [675/723] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:01:12.992 [676/723] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:12.992 [677/723] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:12.992 [678/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:13.250 [679/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:13.509 [680/723] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:13.509 [681/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:13.767 [682/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:13.767 [683/723] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:14.025 [684/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:14.025 [685/723] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:14.025 [686/723] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:14.025 [687/723] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:14.025 [688/723] Linking static target drivers/librte_net_i40e.a 00:01:14.284 [689/723] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:14.541 [690/723] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.800 [691/723] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:14.800 [692/723] Linking target drivers/librte_net_i40e.so.24.2 00:01:15.060 [693/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:15.324 [694/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:16.258 [695/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:24.367 [696/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:24.367 [697/723] Linking static target lib/librte_pipeline.a 00:01:24.933 [698/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:24.933 [699/723] Linking static target lib/librte_vhost.a 00:01:25.500 [700/723] Linking target app/dpdk-dumpcap 00:01:25.500 [701/723] Linking target app/dpdk-pdump 00:01:25.500 [702/723] Linking target app/dpdk-test-cmdline 00:01:25.500 [703/723] Linking target app/dpdk-proc-info 00:01:25.500 [704/723] Linking target app/dpdk-test-pipeline 00:01:25.500 [705/723] Linking target app/dpdk-test-dma-perf 00:01:25.500 [706/723] Linking target app/dpdk-test-acl 00:01:25.500 [707/723] Linking target app/dpdk-test-bbdev 00:01:25.500 [708/723] Linking target app/dpdk-test-flow-perf 00:01:25.500 [709/723] Linking target app/dpdk-test-crypto-perf 00:01:25.500 [710/723] Linking target app/dpdk-test-mldev 00:01:25.500 [711/723] Linking target app/dpdk-test-fib 00:01:25.500 [712/723] Linking target app/dpdk-test-regex 00:01:25.500 [713/723] Linking target app/dpdk-test-gpudev 00:01:25.500 [714/723] Linking target app/dpdk-test-sad 00:01:25.500 [715/723] Linking target app/dpdk-test-security-perf 00:01:25.500 [716/723] Linking target app/dpdk-test-compress-perf 00:01:25.500 [717/723] Linking target app/dpdk-graph 00:01:25.500 [718/723] Linking target app/dpdk-test-eventdev 00:01:25.500 [719/723] Linking target app/dpdk-testpmd 
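[Editor's note] The `$`-prefixed xtrace lines just below (`$ uname -s`, `$ [[ Linux == \F\r\e\e\B\S\D ]]`) show autobuild_common.sh branching on the host OS before launching the install step. A minimal bash sketch of that guard follows; only the two traced commands actually appear in this log, so the surrounding control flow is an assumption.

    #!/usr/bin/env bash
    # Sketch of the guard traced below; the branch body is assumed.
    if [[ "$(uname -s)" == FreeBSD ]]; then
        :  # FreeBSD-specific handling would go here (not taken on this Linux node)
    fi
    # Install step as invoked in the log:
    ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install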
00:01:25.758 [720/723] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.038 [721/723] Linking target lib/librte_vhost.so.24.2 00:01:26.974 [722/723] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.974 [723/723] Linking target lib/librte_pipeline.so.24.2 00:01:26.974 09:34:43 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:01:26.974 09:34:43 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:26.974 09:34:43 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:01:27.232 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:27.232 [0/1] Installing files. 00:01:27.494 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/memory.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/cpu.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/counters.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:27.495 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 
00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:27.495 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:27.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.496 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:27.497 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:27.497 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.498 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:27.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:27.499 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:27.499 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.499 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:01:27.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:27.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:27.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:27.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:27.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:27.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:27.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:27.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:27.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:27.502 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:01:27.502 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.502 Installing lib/librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing lib/librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing drivers/librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:01:27.764 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing drivers/librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:01:27.764 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing drivers/librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:01:27.764 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:27.764 Installing drivers/librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:01:27.764 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ptr_compress/rte_ptr_compress.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.765 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:27.766 Installing
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.766 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
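The long run of entries above and below is DPDK's install step staging every public header (mempool, mbuf, net, ethdev, cryptodev, vhost, and the rest) into the job-local prefix dpdk/build/include rather than a system directory. A minimal sketch of smoke-testing that staged tree from a shell, assuming a C compiler on the build host; probe.c and the cc line are illustrative, not part of this job, and on x86 an extra -march flag matching the installed rte_build_config.h may also be needed:

    DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
    # generate a one-line consumer of two of the headers installed above
    printf '#include <rte_mbuf.h>\n#include <rte_vhost.h>\nint main(void){return 0;}\n' > probe.c
    # compiles only if the staged include tree is complete and self-consistent
    cc -I "$DPDK/include" -c probe.c -o probe.o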
00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.767 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry-exporter.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:28.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:28.026 Installing symlink pointing to librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:01:28.026 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:01:28.026 Installing symlink pointing to librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:01:28.026 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:28.026 Installing symlink pointing to librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.24 00:01:28.026 Installing symlink pointing to librte_argparse.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:01:28.026 Installing symlink pointing to librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:01:28.026 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:28.026 Installing symlink pointing to librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:01:28.026 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:28.026 Installing symlink pointing to librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:01:28.026 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:28.026 Installing symlink pointing 
to librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:01:28.026 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:28.026 Installing symlink pointing to librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:01:28.026 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:28.026 Installing symlink pointing to librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:01:28.026 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:28.026 Installing symlink pointing to librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:01:28.026 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:01:28.026 Installing symlink pointing to librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:01:28.026 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:28.026 Installing symlink pointing to librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:01:28.026 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:28.026 Installing symlink pointing to librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:01:28.026 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:28.026 Installing symlink pointing to librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:01:28.026 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:28.026 Installing symlink pointing to librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:01:28.026 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:28.026 Installing symlink pointing to librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:01:28.026 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:28.026 Installing symlink pointing to librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:01:28.026 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:28.026 Installing symlink pointing to librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:01:28.026 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:28.026 Installing symlink pointing to librte_bbdev.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:01:28.026 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:28.026 Installing symlink pointing to librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:01:28.026 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:28.026 Installing symlink pointing to librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:01:28.026 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:28.026 Installing symlink pointing to librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:01:28.026 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:28.026 Installing symlink pointing to librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:01:28.026 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:28.026 Installing symlink pointing to librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:01:28.026 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:28.026 Installing symlink pointing to librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:01:28.026 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:28.026 Installing symlink pointing to librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:01:28.026 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:28.026 Installing symlink pointing to librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:01:28.026 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:28.026 Installing symlink pointing to librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:01:28.027 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:28.027 Installing symlink pointing to librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:01:28.027 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:01:28.027 Installing symlink pointing to librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:01:28.027 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:28.027 Installing 
symlink pointing to librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:01:28.027 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:28.027 Installing symlink pointing to librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:01:28.027 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:28.027 Installing symlink pointing to librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:01:28.027 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:28.027 Installing symlink pointing to librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:01:28.027 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:28.027 Installing symlink pointing to librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:01:28.027 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:28.027 Installing symlink pointing to librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:01:28.027 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:28.027 Installing symlink pointing to librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:01:28.027 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:01:28.027 Installing symlink pointing to librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:01:28.027 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:28.027 Installing symlink pointing to librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:01:28.027 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:01:28.027 Installing symlink pointing to librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:01:28.027 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:28.027 Installing symlink pointing to librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:01:28.027 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:28.027 Installing symlink pointing to librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:01:28.027 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:01:28.027 Installing symlink pointing to librte_rib.so.24.2 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:01:28.027 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:28.027 Installing symlink pointing to librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:01:28.027 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:28.027 Installing symlink pointing to librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:01:28.027 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:28.027 Installing symlink pointing to librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:01:28.027 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:01:28.027 Installing symlink pointing to librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:01:28.027 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:28.027 Installing symlink pointing to librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:01:28.027 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:28.027 Installing symlink pointing to librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:01:28.027 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:28.027 Installing symlink pointing to librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:01:28.027 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:01:28.027 Installing symlink pointing to librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:01:28.027 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:28.027 Installing symlink pointing to librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:01:28.027 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:01:28.027 Installing symlink pointing to librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:01:28.027 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:28.027 Installing symlink pointing to librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:01:28.027 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:01:28.027 Installing symlink pointing to librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 
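Each DPDK library in the list above is installed as one real DSO plus a two-level symlink chain: the fully versioned file (for example librte_ethdev.so.24.2) is the actual object, the ABI-versioned name (librte_ethdev.so.24) is what the dynamic linker resolves at run time, and the unversioned name (librte_ethdev.so) is what -lrte_ethdev resolves at link time. A sketch reproducing that layout by hand for one library, with names taken from this log (the ln invocations are the conventional equivalent, not lines from the job):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
    ln -sf librte_ethdev.so.24.2 librte_ethdev.so.24   # runtime (SONAME) link
    ln -sf librte_ethdev.so.24   librte_ethdev.so      # development link for -lrte_ethdev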
00:01:28.027 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:28.027 Installing symlink pointing to librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:01:28.027 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:28.027 Installing symlink pointing to librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:01:28.027 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:01:28.027 Installing symlink pointing to librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:01:28.027 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:01:28.027 Installing symlink pointing to librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:01:28.027 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:01:28.027 Installing symlink pointing to librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:01:28.027 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:01:28.027 Installing symlink pointing to librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:01:28.027 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:01:28.027 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:01:28.027 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:01:28.027 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:01:28.027 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:01:28.027 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:01:28.027 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:01:28.027 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:01:28.027 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:01:28.027 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:01:28.027 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:01:28.027 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:01:28.027 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:01:28.027 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:01:28.027 09:34:44 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:01:28.027 09:34:44 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:28.027 00:01:28.027 real 0m40.584s 00:01:28.027 user 13m56.574s 00:01:28.027 sys 2m0.673s 00:01:28.027 09:34:44 
build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:28.027 09:34:44 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:01:28.027 ************************************ 00:01:28.027 END TEST build_native_dpdk 00:01:28.027 ************************************ 00:01:28.027 09:34:44 -- common/autotest_common.sh@1142 -- $ return 0 00:01:28.027 09:34:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:28.027 09:34:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:28.027 09:34:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:28.027 09:34:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:28.027 09:34:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:28.027 09:34:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:28.027 09:34:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:28.027 09:34:44 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:01:28.027 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:28.027 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:28.027 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:28.286 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:28.544 Using 'verbs' RDMA provider 00:01:39.079 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:47.271 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:47.530 Creating mk/config.mk...done. 00:01:47.530 Creating mk/cc.flags.mk...done. 00:01:47.530 Type 'make' to build. 00:01:47.530 09:35:04 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:47.530 09:35:04 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:47.530 09:35:04 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:47.530 09:35:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.530 ************************************ 00:01:47.530 START TEST make 00:01:47.530 ************************************ 00:01:47.530 09:35:04 make -- common/autotest_common.sh@1123 -- $ make -j48 00:01:47.788 make[1]: Nothing to be done for 'all'. 
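The configure output above shows how SPDK binds to the DPDK built earlier in this job: --with-dpdk points at dpdk/build, and the "Using ... for additional libs" line indicates the lookup goes through the pkg-config files staged into dpdk/build/lib/pkgconfig (libdpdk.pc and libdpdk-libs.pc were installed there above). A sketch of resolving those .pc files by hand, assuming pkg-config is on the PATH; the export is an assumption about how the search path is wired, not a line from this log:

    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    pkg-config --cflags libdpdk   # should print -I.../dpdk/build/include
    pkg-config --libs   libdpdk   # should print -L.../dpdk/build/lib plus the -lrte_* set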
00:01:49.182 The Meson build system 00:01:49.182 Version: 1.3.1 00:01:49.182 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:49.182 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:49.182 Build type: native build 00:01:49.182 Project name: libvfio-user 00:01:49.182 Project version: 0.0.1 00:01:49.182 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:49.182 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:49.182 Host machine cpu family: x86_64 00:01:49.182 Host machine cpu: x86_64 00:01:49.182 Run-time dependency threads found: YES 00:01:49.182 Library dl found: YES 00:01:49.182 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:49.182 Run-time dependency json-c found: YES 0.17 00:01:49.182 Run-time dependency cmocka found: YES 1.1.7 00:01:49.182 Program pytest-3 found: NO 00:01:49.182 Program flake8 found: NO 00:01:49.182 Program misspell-fixer found: NO 00:01:49.182 Program restructuredtext-lint found: NO 00:01:49.182 Program valgrind found: YES (/usr/bin/valgrind) 00:01:49.182 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:49.182 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:49.182 Compiler for C supports arguments -Wwrite-strings: YES 00:01:49.182 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:49.182 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:49.182 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:49.182 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
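This block is SPDK's build of the bundled libvfio-user, driven through Meson. A plausible reconstruction of the setup invocation behind it, inferred from the source and build directories printed at the top of the block and from the "User defined options" summary that follows; the exact command line is an assumption, only the option values come from the log:

    meson setup /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user \
        --buildtype debug --default-library shared --libdir /usr/local/lib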
00:01:49.182 Build targets in project: 8 00:01:49.182 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:49.182 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:49.182 00:01:49.182 libvfio-user 0.0.1 00:01:49.182 00:01:49.182 User defined options 00:01:49.182 buildtype : debug 00:01:49.182 default_library: shared 00:01:49.182 libdir : /usr/local/lib 00:01:49.182 00:01:49.182 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:50.131 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:50.131 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:50.131 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:50.131 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:50.131 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:50.395 [5/37] Compiling C object samples/null.p/null.c.o 00:01:50.395 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:50.395 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:50.395 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:50.395 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:50.395 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:50.395 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:50.395 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:50.395 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:50.395 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:50.395 [15/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:50.395 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:50.395 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:50.395 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:50.395 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:50.395 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:50.395 [21/37] Compiling C object samples/server.p/server.c.o 00:01:50.395 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:50.395 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:50.395 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:50.395 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:50.654 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:50.654 [27/37] Compiling C object samples/client.p/client.c.o 00:01:50.654 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:01:50.654 [29/37] Linking target samples/client 00:01:50.654 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:50.654 [31/37] Linking target test/unit_tests 00:01:50.919 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:50.919 [33/37] Linking target samples/gpio-pci-idio-16 00:01:50.919 [34/37] Linking target samples/lspci 00:01:50.919 [35/37] Linking target samples/server 00:01:50.919 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:50.919 [37/37] Linking target samples/null 00:01:50.919 INFO: autodetecting backend as ninja 00:01:50.919 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
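With all 37 ninja steps compiled and linked, the library is installed using the DESTDIR re-rooting pattern (the command on the next line): rather than writing into the real /usr/local/lib, every install path is prefixed with a directory inside SPDK's build tree. The same pattern, with both values copied from this log:

    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
        meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    # libvfio-user.so.0.0.1 should therefore land under .../spdk/build/libvfio-user/usr/local/lib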
00:01:50.919 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:51.864 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:51.864 ninja: no work to do. 00:02:04.059 CC lib/ut_mock/mock.o 00:02:04.059 CC lib/ut/ut.o 00:02:04.059 CC lib/log/log.o 00:02:04.059 CC lib/log/log_flags.o 00:02:04.059 CC lib/log/log_deprecated.o 00:02:04.059 LIB libspdk_log.a 00:02:04.059 LIB libspdk_ut_mock.a 00:02:04.059 LIB libspdk_ut.a 00:02:04.059 SO libspdk_ut_mock.so.6.0 00:02:04.059 SO libspdk_ut.so.2.0 00:02:04.059 SO libspdk_log.so.7.0 00:02:04.059 SYMLINK libspdk_ut_mock.so 00:02:04.059 SYMLINK libspdk_ut.so 00:02:04.059 SYMLINK libspdk_log.so 00:02:04.059 CXX lib/trace_parser/trace.o 00:02:04.059 CC lib/ioat/ioat.o 00:02:04.059 CC lib/util/base64.o 00:02:04.059 CC lib/dma/dma.o 00:02:04.059 CC lib/util/bit_array.o 00:02:04.059 CC lib/util/cpuset.o 00:02:04.059 CC lib/util/crc16.o 00:02:04.059 CC lib/util/crc32.o 00:02:04.059 CC lib/util/crc32c.o 00:02:04.059 CC lib/util/crc32_ieee.o 00:02:04.059 CC lib/util/crc64.o 00:02:04.059 CC lib/util/dif.o 00:02:04.059 CC lib/util/fd.o 00:02:04.059 CC lib/util/file.o 00:02:04.059 CC lib/util/hexlify.o 00:02:04.059 CC lib/util/iov.o 00:02:04.059 CC lib/util/math.o 00:02:04.059 CC lib/util/pipe.o 00:02:04.059 CC lib/util/strerror_tls.o 00:02:04.059 CC lib/util/string.o 00:02:04.059 CC lib/util/uuid.o 00:02:04.059 CC lib/util/fd_group.o 00:02:04.059 CC lib/util/xor.o 00:02:04.059 CC lib/util/zipf.o 00:02:04.059 CC lib/vfio_user/host/vfio_user_pci.o 00:02:04.059 CC lib/vfio_user/host/vfio_user.o 00:02:04.318 LIB libspdk_dma.a 00:02:04.318 SO libspdk_dma.so.4.0 00:02:04.318 SYMLINK libspdk_dma.so 00:02:04.318 LIB libspdk_ioat.a 00:02:04.318 SO libspdk_ioat.so.7.0 00:02:04.318 SYMLINK libspdk_ioat.so 00:02:04.318 LIB libspdk_vfio_user.a 00:02:04.318 SO libspdk_vfio_user.so.5.0 00:02:04.577 SYMLINK libspdk_vfio_user.so 00:02:04.577 LIB libspdk_util.a 00:02:04.577 SO libspdk_util.so.9.1 00:02:04.836 SYMLINK libspdk_util.so 00:02:04.836 CC lib/idxd/idxd.o 00:02:04.836 CC lib/rdma_provider/common.o 00:02:04.836 CC lib/json/json_parse.o 00:02:04.836 CC lib/env_dpdk/env.o 00:02:04.836 CC lib/rdma_utils/rdma_utils.o 00:02:04.836 CC lib/conf/conf.o 00:02:04.836 CC lib/vmd/vmd.o 00:02:04.836 CC lib/idxd/idxd_user.o 00:02:04.836 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:04.836 CC lib/json/json_util.o 00:02:04.836 CC lib/env_dpdk/memory.o 00:02:04.836 CC lib/idxd/idxd_kernel.o 00:02:04.836 CC lib/vmd/led.o 00:02:04.836 CC lib/env_dpdk/pci.o 00:02:04.836 CC lib/json/json_write.o 00:02:04.836 CC lib/env_dpdk/init.o 00:02:04.836 CC lib/env_dpdk/threads.o 00:02:04.836 CC lib/env_dpdk/pci_ioat.o 00:02:04.836 CC lib/env_dpdk/pci_virtio.o 00:02:04.836 CC lib/env_dpdk/pci_vmd.o 00:02:04.836 CC lib/env_dpdk/pci_idxd.o 00:02:04.836 CC lib/env_dpdk/sigbus_handler.o 00:02:04.836 CC lib/env_dpdk/pci_event.o 00:02:04.836 CC lib/env_dpdk/pci_dpdk.o 00:02:04.836 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:04.836 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:04.836 LIB libspdk_trace_parser.a 00:02:04.836 SO libspdk_trace_parser.so.5.0 00:02:05.094 SYMLINK libspdk_trace_parser.so 00:02:05.094 LIB libspdk_rdma_provider.a 00:02:05.094 SO libspdk_rdma_provider.so.6.0 00:02:05.094 LIB libspdk_rdma_utils.a 00:02:05.094 SYMLINK libspdk_rdma_provider.so 00:02:05.094 LIB libspdk_json.a 00:02:05.094 SO 
libspdk_rdma_utils.so.1.0 00:02:05.353 LIB libspdk_conf.a 00:02:05.353 SO libspdk_json.so.6.0 00:02:05.353 SO libspdk_conf.so.6.0 00:02:05.353 SYMLINK libspdk_rdma_utils.so 00:02:05.353 SYMLINK libspdk_json.so 00:02:05.353 SYMLINK libspdk_conf.so 00:02:05.353 CC lib/jsonrpc/jsonrpc_server.o 00:02:05.353 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:05.353 CC lib/jsonrpc/jsonrpc_client.o 00:02:05.353 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:05.611 LIB libspdk_idxd.a 00:02:05.611 LIB libspdk_vmd.a 00:02:05.611 SO libspdk_idxd.so.12.0 00:02:05.611 SO libspdk_vmd.so.6.0 00:02:05.611 SYMLINK libspdk_idxd.so 00:02:05.611 SYMLINK libspdk_vmd.so 00:02:05.611 LIB libspdk_jsonrpc.a 00:02:05.869 SO libspdk_jsonrpc.so.6.0 00:02:05.869 SYMLINK libspdk_jsonrpc.so 00:02:05.869 CC lib/rpc/rpc.o 00:02:06.127 LIB libspdk_rpc.a 00:02:06.127 SO libspdk_rpc.so.6.0 00:02:06.411 SYMLINK libspdk_rpc.so 00:02:06.411 LIB libspdk_env_dpdk.a 00:02:06.411 SO libspdk_env_dpdk.so.14.1 00:02:06.411 CC lib/keyring/keyring.o 00:02:06.411 CC lib/trace/trace.o 00:02:06.411 CC lib/notify/notify.o 00:02:06.411 CC lib/keyring/keyring_rpc.o 00:02:06.411 CC lib/notify/notify_rpc.o 00:02:06.411 CC lib/trace/trace_flags.o 00:02:06.411 CC lib/trace/trace_rpc.o 00:02:06.411 SYMLINK libspdk_env_dpdk.so 00:02:06.667 LIB libspdk_notify.a 00:02:06.667 SO libspdk_notify.so.6.0 00:02:06.667 LIB libspdk_keyring.a 00:02:06.667 SYMLINK libspdk_notify.so 00:02:06.667 LIB libspdk_trace.a 00:02:06.667 SO libspdk_keyring.so.1.0 00:02:06.667 SO libspdk_trace.so.10.0 00:02:06.667 SYMLINK libspdk_keyring.so 00:02:06.924 SYMLINK libspdk_trace.so 00:02:06.924 CC lib/thread/thread.o 00:02:06.924 CC lib/thread/iobuf.o 00:02:06.924 CC lib/sock/sock.o 00:02:06.924 CC lib/sock/sock_rpc.o 00:02:07.489 LIB libspdk_sock.a 00:02:07.489 SO libspdk_sock.so.10.0 00:02:07.489 SYMLINK libspdk_sock.so 00:02:07.489 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:07.489 CC lib/nvme/nvme_ctrlr.o 00:02:07.489 CC lib/nvme/nvme_fabric.o 00:02:07.489 CC lib/nvme/nvme_ns_cmd.o 00:02:07.489 CC lib/nvme/nvme_ns.o 00:02:07.489 CC lib/nvme/nvme_pcie_common.o 00:02:07.489 CC lib/nvme/nvme_pcie.o 00:02:07.489 CC lib/nvme/nvme_qpair.o 00:02:07.489 CC lib/nvme/nvme.o 00:02:07.489 CC lib/nvme/nvme_quirks.o 00:02:07.489 CC lib/nvme/nvme_transport.o 00:02:07.489 CC lib/nvme/nvme_discovery.o 00:02:07.489 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:07.489 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:07.489 CC lib/nvme/nvme_tcp.o 00:02:07.489 CC lib/nvme/nvme_opal.o 00:02:07.489 CC lib/nvme/nvme_io_msg.o 00:02:07.489 CC lib/nvme/nvme_poll_group.o 00:02:07.489 CC lib/nvme/nvme_zns.o 00:02:07.489 CC lib/nvme/nvme_stubs.o 00:02:07.489 CC lib/nvme/nvme_auth.o 00:02:07.489 CC lib/nvme/nvme_cuse.o 00:02:07.489 CC lib/nvme/nvme_vfio_user.o 00:02:07.489 CC lib/nvme/nvme_rdma.o 00:02:08.887 LIB libspdk_thread.a 00:02:08.887 SO libspdk_thread.so.10.1 00:02:08.887 SYMLINK libspdk_thread.so 00:02:08.887 CC lib/virtio/virtio.o 00:02:08.887 CC lib/accel/accel.o 00:02:08.887 CC lib/init/json_config.o 00:02:08.887 CC lib/vfu_tgt/tgt_endpoint.o 00:02:08.887 CC lib/blob/blobstore.o 00:02:08.887 CC lib/virtio/virtio_vhost_user.o 00:02:08.887 CC lib/accel/accel_rpc.o 00:02:08.887 CC lib/init/subsystem.o 00:02:08.887 CC lib/vfu_tgt/tgt_rpc.o 00:02:08.887 CC lib/blob/request.o 00:02:08.887 CC lib/virtio/virtio_vfio_user.o 00:02:08.887 CC lib/init/subsystem_rpc.o 00:02:08.887 CC lib/accel/accel_sw.o 00:02:08.887 CC lib/blob/zeroes.o 00:02:08.887 CC lib/virtio/virtio_pci.o 00:02:08.887 CC lib/init/rpc.o 00:02:08.887 CC 
lib/blob/blob_bs_dev.o 00:02:09.145 LIB libspdk_init.a 00:02:09.145 SO libspdk_init.so.5.0 00:02:09.145 LIB libspdk_vfu_tgt.a 00:02:09.145 LIB libspdk_virtio.a 00:02:09.145 SYMLINK libspdk_init.so 00:02:09.145 SO libspdk_vfu_tgt.so.3.0 00:02:09.145 SO libspdk_virtio.so.7.0 00:02:09.145 SYMLINK libspdk_vfu_tgt.so 00:02:09.403 SYMLINK libspdk_virtio.so 00:02:09.403 CC lib/event/app.o 00:02:09.403 CC lib/event/reactor.o 00:02:09.403 CC lib/event/log_rpc.o 00:02:09.403 CC lib/event/app_rpc.o 00:02:09.403 CC lib/event/scheduler_static.o 00:02:09.660 LIB libspdk_event.a 00:02:09.917 SO libspdk_event.so.14.0 00:02:09.917 LIB libspdk_accel.a 00:02:09.917 SYMLINK libspdk_event.so 00:02:09.917 SO libspdk_accel.so.15.1 00:02:09.917 SYMLINK libspdk_accel.so 00:02:09.917 LIB libspdk_nvme.a 00:02:10.175 SO libspdk_nvme.so.13.1 00:02:10.175 CC lib/bdev/bdev.o 00:02:10.175 CC lib/bdev/bdev_rpc.o 00:02:10.175 CC lib/bdev/bdev_zone.o 00:02:10.175 CC lib/bdev/part.o 00:02:10.175 CC lib/bdev/scsi_nvme.o 00:02:10.432 SYMLINK libspdk_nvme.so 00:02:11.804 LIB libspdk_blob.a 00:02:11.804 SO libspdk_blob.so.11.0 00:02:11.804 SYMLINK libspdk_blob.so 00:02:12.062 CC lib/blobfs/blobfs.o 00:02:12.062 CC lib/blobfs/tree.o 00:02:12.062 CC lib/lvol/lvol.o 00:02:12.627 LIB libspdk_bdev.a 00:02:12.627 SO libspdk_bdev.so.15.1 00:02:12.888 SYMLINK libspdk_bdev.so 00:02:12.888 LIB libspdk_blobfs.a 00:02:12.888 SO libspdk_blobfs.so.10.0 00:02:12.888 LIB libspdk_lvol.a 00:02:12.888 SYMLINK libspdk_blobfs.so 00:02:12.888 CC lib/nbd/nbd.o 00:02:12.888 CC lib/ublk/ublk.o 00:02:12.888 CC lib/nbd/nbd_rpc.o 00:02:12.888 CC lib/scsi/dev.o 00:02:12.888 CC lib/ublk/ublk_rpc.o 00:02:12.888 CC lib/nvmf/ctrlr.o 00:02:12.888 CC lib/scsi/lun.o 00:02:12.888 CC lib/nvmf/ctrlr_discovery.o 00:02:12.888 CC lib/ftl/ftl_core.o 00:02:12.888 CC lib/scsi/port.o 00:02:12.888 CC lib/nvmf/ctrlr_bdev.o 00:02:12.888 CC lib/scsi/scsi.o 00:02:12.888 CC lib/ftl/ftl_init.o 00:02:12.888 CC lib/nvmf/subsystem.o 00:02:12.888 CC lib/scsi/scsi_bdev.o 00:02:12.888 CC lib/ftl/ftl_layout.o 00:02:12.888 CC lib/nvmf/nvmf.o 00:02:12.888 CC lib/scsi/scsi_pr.o 00:02:12.888 CC lib/ftl/ftl_debug.o 00:02:12.888 CC lib/nvmf/nvmf_rpc.o 00:02:12.888 CC lib/ftl/ftl_io.o 00:02:12.888 CC lib/scsi/task.o 00:02:12.888 CC lib/scsi/scsi_rpc.o 00:02:12.888 CC lib/nvmf/transport.o 00:02:12.888 CC lib/nvmf/tcp.o 00:02:12.888 CC lib/ftl/ftl_sb.o 00:02:12.888 CC lib/ftl/ftl_l2p.o 00:02:12.888 CC lib/nvmf/stubs.o 00:02:12.888 CC lib/ftl/ftl_nv_cache.o 00:02:12.888 CC lib/ftl/ftl_l2p_flat.o 00:02:12.888 CC lib/nvmf/mdns_server.o 00:02:12.888 CC lib/nvmf/vfio_user.o 00:02:12.888 CC lib/ftl/ftl_band.o 00:02:12.888 CC lib/nvmf/rdma.o 00:02:12.888 CC lib/ftl/ftl_band_ops.o 00:02:12.889 CC lib/nvmf/auth.o 00:02:12.889 CC lib/ftl/ftl_writer.o 00:02:12.889 CC lib/ftl/ftl_rq.o 00:02:12.889 CC lib/ftl/ftl_reloc.o 00:02:12.889 CC lib/ftl/ftl_l2p_cache.o 00:02:12.889 CC lib/ftl/ftl_p2l.o 00:02:12.889 CC lib/ftl/mngt/ftl_mngt.o 00:02:12.889 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:12.889 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:12.889 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:12.889 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:12.889 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:12.889 SO libspdk_lvol.so.10.0 00:02:13.153 SYMLINK libspdk_lvol.so 00:02:13.153 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:13.419 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:13.419 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:13.419 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:13.419 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:13.419 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:02:13.419 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:13.419 CC lib/ftl/utils/ftl_conf.o 00:02:13.419 CC lib/ftl/utils/ftl_md.o 00:02:13.419 CC lib/ftl/utils/ftl_mempool.o 00:02:13.419 CC lib/ftl/utils/ftl_bitmap.o 00:02:13.419 CC lib/ftl/utils/ftl_property.o 00:02:13.419 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:13.419 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:13.419 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:13.419 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:13.419 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:13.419 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:13.677 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:13.677 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:13.677 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:13.677 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:13.677 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:13.677 CC lib/ftl/base/ftl_base_dev.o 00:02:13.677 CC lib/ftl/base/ftl_base_bdev.o 00:02:13.677 CC lib/ftl/ftl_trace.o 00:02:13.677 LIB libspdk_nbd.a 00:02:13.935 SO libspdk_nbd.so.7.0 00:02:13.935 LIB libspdk_scsi.a 00:02:13.935 SYMLINK libspdk_nbd.so 00:02:13.935 SO libspdk_scsi.so.9.0 00:02:13.935 SYMLINK libspdk_scsi.so 00:02:14.193 LIB libspdk_ublk.a 00:02:14.193 SO libspdk_ublk.so.3.0 00:02:14.193 SYMLINK libspdk_ublk.so 00:02:14.193 CC lib/vhost/vhost.o 00:02:14.193 CC lib/iscsi/conn.o 00:02:14.193 CC lib/iscsi/init_grp.o 00:02:14.193 CC lib/vhost/vhost_rpc.o 00:02:14.193 CC lib/iscsi/iscsi.o 00:02:14.193 CC lib/vhost/vhost_scsi.o 00:02:14.193 CC lib/iscsi/md5.o 00:02:14.193 CC lib/vhost/vhost_blk.o 00:02:14.193 CC lib/iscsi/param.o 00:02:14.193 CC lib/vhost/rte_vhost_user.o 00:02:14.193 CC lib/iscsi/portal_grp.o 00:02:14.193 CC lib/iscsi/tgt_node.o 00:02:14.193 CC lib/iscsi/iscsi_subsystem.o 00:02:14.193 CC lib/iscsi/iscsi_rpc.o 00:02:14.193 CC lib/iscsi/task.o 00:02:14.451 LIB libspdk_ftl.a 00:02:14.451 SO libspdk_ftl.so.9.0 00:02:15.015 SYMLINK libspdk_ftl.so 00:02:15.274 LIB libspdk_vhost.a 00:02:15.531 SO libspdk_vhost.so.8.0 00:02:15.531 SYMLINK libspdk_vhost.so 00:02:15.531 LIB libspdk_nvmf.a 00:02:15.531 SO libspdk_nvmf.so.18.1 00:02:15.531 LIB libspdk_iscsi.a 00:02:15.788 SO libspdk_iscsi.so.8.0 00:02:15.788 SYMLINK libspdk_nvmf.so 00:02:15.788 SYMLINK libspdk_iscsi.so 00:02:16.045 CC module/vfu_device/vfu_virtio.o 00:02:16.045 CC module/env_dpdk/env_dpdk_rpc.o 00:02:16.045 CC module/vfu_device/vfu_virtio_blk.o 00:02:16.045 CC module/vfu_device/vfu_virtio_scsi.o 00:02:16.045 CC module/vfu_device/vfu_virtio_rpc.o 00:02:16.303 CC module/accel/error/accel_error.o 00:02:16.303 CC module/accel/dsa/accel_dsa.o 00:02:16.303 CC module/accel/dsa/accel_dsa_rpc.o 00:02:16.303 CC module/keyring/file/keyring.o 00:02:16.303 CC module/accel/error/accel_error_rpc.o 00:02:16.303 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:16.303 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:16.303 CC module/sock/posix/posix.o 00:02:16.303 CC module/accel/ioat/accel_ioat.o 00:02:16.303 CC module/keyring/file/keyring_rpc.o 00:02:16.303 CC module/blob/bdev/blob_bdev.o 00:02:16.303 CC module/accel/ioat/accel_ioat_rpc.o 00:02:16.303 CC module/accel/iaa/accel_iaa.o 00:02:16.303 CC module/accel/iaa/accel_iaa_rpc.o 00:02:16.303 CC module/keyring/linux/keyring.o 00:02:16.303 CC module/keyring/linux/keyring_rpc.o 00:02:16.303 CC module/scheduler/gscheduler/gscheduler.o 00:02:16.303 LIB libspdk_env_dpdk_rpc.a 00:02:16.303 SO libspdk_env_dpdk_rpc.so.6.0 00:02:16.303 SYMLINK libspdk_env_dpdk_rpc.so 00:02:16.303 LIB libspdk_keyring_file.a 00:02:16.303 LIB libspdk_keyring_linux.a 00:02:16.303 LIB 
libspdk_scheduler_gscheduler.a 00:02:16.303 LIB libspdk_scheduler_dpdk_governor.a 00:02:16.303 SO libspdk_keyring_linux.so.1.0 00:02:16.303 SO libspdk_keyring_file.so.1.0 00:02:16.303 SO libspdk_scheduler_gscheduler.so.4.0 00:02:16.303 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:16.303 LIB libspdk_accel_error.a 00:02:16.303 LIB libspdk_accel_ioat.a 00:02:16.303 LIB libspdk_scheduler_dynamic.a 00:02:16.560 SO libspdk_accel_error.so.2.0 00:02:16.560 LIB libspdk_accel_iaa.a 00:02:16.560 SO libspdk_scheduler_dynamic.so.4.0 00:02:16.560 SO libspdk_accel_ioat.so.6.0 00:02:16.560 SYMLINK libspdk_scheduler_gscheduler.so 00:02:16.560 SYMLINK libspdk_keyring_file.so 00:02:16.561 SYMLINK libspdk_keyring_linux.so 00:02:16.561 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:16.561 SO libspdk_accel_iaa.so.3.0 00:02:16.561 LIB libspdk_accel_dsa.a 00:02:16.561 SYMLINK libspdk_accel_error.so 00:02:16.561 SYMLINK libspdk_scheduler_dynamic.so 00:02:16.561 LIB libspdk_blob_bdev.a 00:02:16.561 SYMLINK libspdk_accel_ioat.so 00:02:16.561 SO libspdk_accel_dsa.so.5.0 00:02:16.561 SO libspdk_blob_bdev.so.11.0 00:02:16.561 SYMLINK libspdk_accel_iaa.so 00:02:16.561 SYMLINK libspdk_blob_bdev.so 00:02:16.561 SYMLINK libspdk_accel_dsa.so 00:02:16.825 LIB libspdk_vfu_device.a 00:02:16.825 SO libspdk_vfu_device.so.3.0 00:02:16.825 CC module/bdev/lvol/vbdev_lvol.o 00:02:16.825 CC module/bdev/error/vbdev_error.o 00:02:16.825 CC module/bdev/null/bdev_null.o 00:02:16.825 CC module/bdev/gpt/gpt.o 00:02:16.825 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:16.825 CC module/blobfs/bdev/blobfs_bdev.o 00:02:16.825 CC module/bdev/gpt/vbdev_gpt.o 00:02:16.825 CC module/bdev/delay/vbdev_delay.o 00:02:16.825 CC module/bdev/error/vbdev_error_rpc.o 00:02:16.825 CC module/bdev/malloc/bdev_malloc.o 00:02:16.825 CC module/bdev/nvme/bdev_nvme.o 00:02:16.825 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:16.825 CC module/bdev/null/bdev_null_rpc.o 00:02:16.825 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:16.825 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:16.825 CC module/bdev/nvme/nvme_rpc.o 00:02:16.825 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:16.825 CC module/bdev/split/vbdev_split.o 00:02:16.825 CC module/bdev/nvme/bdev_mdns_client.o 00:02:16.825 CC module/bdev/nvme/vbdev_opal.o 00:02:16.825 CC module/bdev/passthru/vbdev_passthru.o 00:02:16.825 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:16.825 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:16.825 CC module/bdev/split/vbdev_split_rpc.o 00:02:16.825 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:16.825 CC module/bdev/raid/bdev_raid.o 00:02:16.825 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:16.825 CC module/bdev/raid/bdev_raid_rpc.o 00:02:16.825 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:16.825 CC module/bdev/raid/bdev_raid_sb.o 00:02:16.825 CC module/bdev/raid/raid0.o 00:02:16.825 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:16.825 CC module/bdev/ftl/bdev_ftl.o 00:02:16.825 CC module/bdev/aio/bdev_aio.o 00:02:16.825 CC module/bdev/raid/raid1.o 00:02:16.825 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:16.825 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:16.825 CC module/bdev/aio/bdev_aio_rpc.o 00:02:16.825 CC module/bdev/raid/concat.o 00:02:16.825 CC module/bdev/iscsi/bdev_iscsi.o 00:02:16.825 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:16.825 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:16.825 SYMLINK libspdk_vfu_device.so 00:02:17.084 LIB libspdk_sock_posix.a 00:02:17.084 LIB libspdk_bdev_split.a 00:02:17.084 SO libspdk_sock_posix.so.6.0 00:02:17.084 LIB 
libspdk_blobfs_bdev.a 00:02:17.342 LIB libspdk_bdev_gpt.a 00:02:17.342 SO libspdk_bdev_split.so.6.0 00:02:17.342 SO libspdk_blobfs_bdev.so.6.0 00:02:17.342 SO libspdk_bdev_gpt.so.6.0 00:02:17.342 SYMLINK libspdk_sock_posix.so 00:02:17.342 LIB libspdk_bdev_zone_block.a 00:02:17.342 LIB libspdk_bdev_error.a 00:02:17.342 SYMLINK libspdk_blobfs_bdev.so 00:02:17.342 SYMLINK libspdk_bdev_gpt.so 00:02:17.342 LIB libspdk_bdev_null.a 00:02:17.342 SO libspdk_bdev_zone_block.so.6.0 00:02:17.342 SYMLINK libspdk_bdev_split.so 00:02:17.342 SO libspdk_bdev_error.so.6.0 00:02:17.342 SO libspdk_bdev_null.so.6.0 00:02:17.342 LIB libspdk_bdev_ftl.a 00:02:17.342 LIB libspdk_bdev_delay.a 00:02:17.342 SYMLINK libspdk_bdev_zone_block.so 00:02:17.342 LIB libspdk_bdev_malloc.a 00:02:17.342 SYMLINK libspdk_bdev_error.so 00:02:17.342 LIB libspdk_bdev_passthru.a 00:02:17.342 SO libspdk_bdev_ftl.so.6.0 00:02:17.342 SYMLINK libspdk_bdev_null.so 00:02:17.342 SO libspdk_bdev_delay.so.6.0 00:02:17.342 LIB libspdk_bdev_iscsi.a 00:02:17.342 SO libspdk_bdev_malloc.so.6.0 00:02:17.342 LIB libspdk_bdev_aio.a 00:02:17.342 SO libspdk_bdev_passthru.so.6.0 00:02:17.342 SO libspdk_bdev_iscsi.so.6.0 00:02:17.342 SO libspdk_bdev_aio.so.6.0 00:02:17.342 SYMLINK libspdk_bdev_ftl.so 00:02:17.342 SYMLINK libspdk_bdev_delay.so 00:02:17.600 SYMLINK libspdk_bdev_malloc.so 00:02:17.600 SYMLINK libspdk_bdev_passthru.so 00:02:17.600 SYMLINK libspdk_bdev_iscsi.so 00:02:17.600 SYMLINK libspdk_bdev_aio.so 00:02:17.600 LIB libspdk_bdev_lvol.a 00:02:17.600 LIB libspdk_bdev_virtio.a 00:02:17.600 SO libspdk_bdev_lvol.so.6.0 00:02:17.600 SO libspdk_bdev_virtio.so.6.0 00:02:17.600 SYMLINK libspdk_bdev_lvol.so 00:02:17.600 SYMLINK libspdk_bdev_virtio.so 00:02:18.166 LIB libspdk_bdev_raid.a 00:02:18.166 SO libspdk_bdev_raid.so.6.0 00:02:18.166 SYMLINK libspdk_bdev_raid.so 00:02:19.101 LIB libspdk_bdev_nvme.a 00:02:19.359 SO libspdk_bdev_nvme.so.7.0 00:02:19.359 SYMLINK libspdk_bdev_nvme.so 00:02:19.616 CC module/event/subsystems/vmd/vmd.o 00:02:19.616 CC module/event/subsystems/keyring/keyring.o 00:02:19.616 CC module/event/subsystems/iobuf/iobuf.o 00:02:19.616 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:19.616 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:19.616 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:19.616 CC module/event/subsystems/scheduler/scheduler.o 00:02:19.616 CC module/event/subsystems/sock/sock.o 00:02:19.616 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:19.875 LIB libspdk_event_keyring.a 00:02:19.875 LIB libspdk_event_vhost_blk.a 00:02:19.875 LIB libspdk_event_scheduler.a 00:02:19.875 LIB libspdk_event_vfu_tgt.a 00:02:19.875 LIB libspdk_event_sock.a 00:02:19.875 LIB libspdk_event_vmd.a 00:02:19.875 LIB libspdk_event_iobuf.a 00:02:19.875 SO libspdk_event_keyring.so.1.0 00:02:19.875 SO libspdk_event_vhost_blk.so.3.0 00:02:19.875 SO libspdk_event_scheduler.so.4.0 00:02:19.875 SO libspdk_event_vfu_tgt.so.3.0 00:02:19.875 SO libspdk_event_sock.so.5.0 00:02:19.875 SO libspdk_event_vmd.so.6.0 00:02:19.875 SO libspdk_event_iobuf.so.3.0 00:02:19.875 SYMLINK libspdk_event_keyring.so 00:02:19.875 SYMLINK libspdk_event_vhost_blk.so 00:02:19.875 SYMLINK libspdk_event_scheduler.so 00:02:19.875 SYMLINK libspdk_event_vfu_tgt.so 00:02:19.875 SYMLINK libspdk_event_sock.so 00:02:19.875 SYMLINK libspdk_event_vmd.so 00:02:19.875 SYMLINK libspdk_event_iobuf.so 00:02:20.133 CC module/event/subsystems/accel/accel.o 00:02:20.390 LIB libspdk_event_accel.a 00:02:20.390 SO libspdk_event_accel.so.6.0 00:02:20.390 SYMLINK 
libspdk_event_accel.so 00:02:20.648 CC module/event/subsystems/bdev/bdev.o 00:02:20.648 LIB libspdk_event_bdev.a 00:02:20.906 SO libspdk_event_bdev.so.6.0 00:02:20.906 SYMLINK libspdk_event_bdev.so 00:02:20.906 CC module/event/subsystems/nbd/nbd.o 00:02:20.906 CC module/event/subsystems/scsi/scsi.o 00:02:20.906 CC module/event/subsystems/ublk/ublk.o 00:02:20.906 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:20.906 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:21.164 LIB libspdk_event_nbd.a 00:02:21.164 LIB libspdk_event_ublk.a 00:02:21.164 LIB libspdk_event_scsi.a 00:02:21.164 SO libspdk_event_ublk.so.3.0 00:02:21.164 SO libspdk_event_nbd.so.6.0 00:02:21.164 SO libspdk_event_scsi.so.6.0 00:02:21.164 SYMLINK libspdk_event_nbd.so 00:02:21.164 SYMLINK libspdk_event_ublk.so 00:02:21.164 LIB libspdk_event_nvmf.a 00:02:21.164 SYMLINK libspdk_event_scsi.so 00:02:21.164 SO libspdk_event_nvmf.so.6.0 00:02:21.423 SYMLINK libspdk_event_nvmf.so 00:02:21.423 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:21.423 CC module/event/subsystems/iscsi/iscsi.o 00:02:21.423 LIB libspdk_event_vhost_scsi.a 00:02:21.681 SO libspdk_event_vhost_scsi.so.3.0 00:02:21.681 LIB libspdk_event_iscsi.a 00:02:21.681 SO libspdk_event_iscsi.so.6.0 00:02:21.681 SYMLINK libspdk_event_vhost_scsi.so 00:02:21.681 SYMLINK libspdk_event_iscsi.so 00:02:21.681 SO libspdk.so.6.0 00:02:21.681 SYMLINK libspdk.so 00:02:21.951 CC app/trace_record/trace_record.o 00:02:21.951 CXX app/trace/trace.o 00:02:21.951 CC app/spdk_lspci/spdk_lspci.o 00:02:21.951 CC app/spdk_nvme_discover/discovery_aer.o 00:02:21.951 CC app/spdk_nvme_perf/perf.o 00:02:21.951 CC app/spdk_nvme_identify/identify.o 00:02:21.951 CC app/spdk_top/spdk_top.o 00:02:21.951 CC test/rpc_client/rpc_client_test.o 00:02:21.951 TEST_HEADER include/spdk/accel.h 00:02:21.951 TEST_HEADER include/spdk/accel_module.h 00:02:21.951 TEST_HEADER include/spdk/assert.h 00:02:21.951 TEST_HEADER include/spdk/barrier.h 00:02:21.951 TEST_HEADER include/spdk/base64.h 00:02:21.951 TEST_HEADER include/spdk/bdev.h 00:02:21.952 TEST_HEADER include/spdk/bdev_module.h 00:02:21.952 TEST_HEADER include/spdk/bdev_zone.h 00:02:21.952 TEST_HEADER include/spdk/bit_array.h 00:02:21.952 TEST_HEADER include/spdk/bit_pool.h 00:02:21.952 TEST_HEADER include/spdk/blob_bdev.h 00:02:21.952 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:21.952 TEST_HEADER include/spdk/blobfs.h 00:02:21.952 TEST_HEADER include/spdk/blob.h 00:02:21.952 TEST_HEADER include/spdk/conf.h 00:02:21.952 TEST_HEADER include/spdk/config.h 00:02:21.952 TEST_HEADER include/spdk/cpuset.h 00:02:21.952 TEST_HEADER include/spdk/crc16.h 00:02:21.952 TEST_HEADER include/spdk/crc32.h 00:02:21.952 TEST_HEADER include/spdk/crc64.h 00:02:21.952 TEST_HEADER include/spdk/dif.h 00:02:21.952 TEST_HEADER include/spdk/dma.h 00:02:21.952 TEST_HEADER include/spdk/endian.h 00:02:21.952 TEST_HEADER include/spdk/env_dpdk.h 00:02:21.952 TEST_HEADER include/spdk/event.h 00:02:21.952 TEST_HEADER include/spdk/env.h 00:02:21.952 TEST_HEADER include/spdk/fd_group.h 00:02:21.952 TEST_HEADER include/spdk/file.h 00:02:21.952 TEST_HEADER include/spdk/fd.h 00:02:21.952 TEST_HEADER include/spdk/ftl.h 00:02:21.952 TEST_HEADER include/spdk/gpt_spec.h 00:02:21.952 TEST_HEADER include/spdk/hexlify.h 00:02:21.952 TEST_HEADER include/spdk/histogram_data.h 00:02:21.952 TEST_HEADER include/spdk/idxd.h 00:02:21.952 TEST_HEADER include/spdk/idxd_spec.h 00:02:21.952 TEST_HEADER include/spdk/init.h 00:02:21.952 TEST_HEADER include/spdk/ioat.h 00:02:21.952 TEST_HEADER 
include/spdk/ioat_spec.h 00:02:21.952 TEST_HEADER include/spdk/iscsi_spec.h 00:02:21.952 TEST_HEADER include/spdk/json.h 00:02:21.952 TEST_HEADER include/spdk/jsonrpc.h 00:02:21.952 TEST_HEADER include/spdk/keyring.h 00:02:21.952 TEST_HEADER include/spdk/keyring_module.h 00:02:21.952 TEST_HEADER include/spdk/likely.h 00:02:21.952 TEST_HEADER include/spdk/log.h 00:02:21.952 TEST_HEADER include/spdk/lvol.h 00:02:21.952 TEST_HEADER include/spdk/mmio.h 00:02:21.952 TEST_HEADER include/spdk/memory.h 00:02:21.952 TEST_HEADER include/spdk/nbd.h 00:02:21.952 TEST_HEADER include/spdk/notify.h 00:02:21.952 TEST_HEADER include/spdk/nvme_intel.h 00:02:21.952 TEST_HEADER include/spdk/nvme.h 00:02:21.952 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:21.952 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:21.952 TEST_HEADER include/spdk/nvme_spec.h 00:02:21.952 TEST_HEADER include/spdk/nvme_zns.h 00:02:21.952 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:21.952 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:21.952 TEST_HEADER include/spdk/nvmf.h 00:02:21.952 TEST_HEADER include/spdk/nvmf_spec.h 00:02:21.952 TEST_HEADER include/spdk/nvmf_transport.h 00:02:21.952 TEST_HEADER include/spdk/opal.h 00:02:21.952 TEST_HEADER include/spdk/opal_spec.h 00:02:21.952 TEST_HEADER include/spdk/pci_ids.h 00:02:21.952 TEST_HEADER include/spdk/pipe.h 00:02:21.952 TEST_HEADER include/spdk/queue.h 00:02:21.952 TEST_HEADER include/spdk/rpc.h 00:02:21.952 TEST_HEADER include/spdk/reduce.h 00:02:21.952 TEST_HEADER include/spdk/scheduler.h 00:02:21.952 TEST_HEADER include/spdk/scsi.h 00:02:21.952 TEST_HEADER include/spdk/scsi_spec.h 00:02:21.952 TEST_HEADER include/spdk/sock.h 00:02:21.952 TEST_HEADER include/spdk/stdinc.h 00:02:21.952 TEST_HEADER include/spdk/string.h 00:02:21.952 TEST_HEADER include/spdk/thread.h 00:02:21.952 CC app/spdk_dd/spdk_dd.o 00:02:21.952 TEST_HEADER include/spdk/trace.h 00:02:21.952 TEST_HEADER include/spdk/tree.h 00:02:21.952 TEST_HEADER include/spdk/trace_parser.h 00:02:21.952 TEST_HEADER include/spdk/util.h 00:02:21.952 TEST_HEADER include/spdk/ublk.h 00:02:21.952 TEST_HEADER include/spdk/uuid.h 00:02:21.952 TEST_HEADER include/spdk/version.h 00:02:21.952 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:21.952 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:21.952 TEST_HEADER include/spdk/vmd.h 00:02:21.952 TEST_HEADER include/spdk/vhost.h 00:02:21.952 TEST_HEADER include/spdk/xor.h 00:02:21.952 TEST_HEADER include/spdk/zipf.h 00:02:21.952 CXX test/cpp_headers/accel.o 00:02:21.952 CXX test/cpp_headers/accel_module.o 00:02:21.952 CXX test/cpp_headers/assert.o 00:02:21.952 CXX test/cpp_headers/barrier.o 00:02:21.952 CXX test/cpp_headers/base64.o 00:02:21.952 CC app/iscsi_tgt/iscsi_tgt.o 00:02:21.952 CXX test/cpp_headers/bdev.o 00:02:21.952 CXX test/cpp_headers/bdev_module.o 00:02:21.952 CXX test/cpp_headers/bdev_zone.o 00:02:21.952 CXX test/cpp_headers/bit_array.o 00:02:21.952 CXX test/cpp_headers/bit_pool.o 00:02:21.952 CXX test/cpp_headers/blob_bdev.o 00:02:21.952 CXX test/cpp_headers/blobfs_bdev.o 00:02:21.952 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:21.952 CXX test/cpp_headers/blobfs.o 00:02:21.952 CXX test/cpp_headers/blob.o 00:02:21.952 CXX test/cpp_headers/conf.o 00:02:21.952 CXX test/cpp_headers/config.o 00:02:21.952 CXX test/cpp_headers/cpuset.o 00:02:21.952 CXX test/cpp_headers/crc16.o 00:02:21.952 CC app/nvmf_tgt/nvmf_main.o 00:02:21.952 CXX test/cpp_headers/crc32.o 00:02:22.215 CC app/spdk_tgt/spdk_tgt.o 00:02:22.215 CC test/thread/poller_perf/poller_perf.o 00:02:22.215 CC 
examples/ioat/verify/verify.o 00:02:22.215 CC examples/util/zipf/zipf.o 00:02:22.215 CC test/app/stub/stub.o 00:02:22.215 CC test/env/memory/memory_ut.o 00:02:22.215 CC examples/ioat/perf/perf.o 00:02:22.215 CC test/env/vtophys/vtophys.o 00:02:22.215 CC test/app/histogram_perf/histogram_perf.o 00:02:22.215 CC test/app/jsoncat/jsoncat.o 00:02:22.215 CC test/env/pci/pci_ut.o 00:02:22.215 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:22.215 CC app/fio/nvme/fio_plugin.o 00:02:22.215 CC test/dma/test_dma/test_dma.o 00:02:22.215 CC test/app/bdev_svc/bdev_svc.o 00:02:22.215 CC app/fio/bdev/fio_plugin.o 00:02:22.215 LINK spdk_lspci 00:02:22.215 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:22.215 CC test/env/mem_callbacks/mem_callbacks.o 00:02:22.479 LINK rpc_client_test 00:02:22.479 LINK spdk_nvme_discover 00:02:22.479 LINK poller_perf 00:02:22.479 LINK jsoncat 00:02:22.479 LINK histogram_perf 00:02:22.479 LINK vtophys 00:02:22.479 LINK interrupt_tgt 00:02:22.479 LINK zipf 00:02:22.479 CXX test/cpp_headers/crc64.o 00:02:22.479 CXX test/cpp_headers/dif.o 00:02:22.479 CXX test/cpp_headers/dma.o 00:02:22.479 CXX test/cpp_headers/endian.o 00:02:22.479 CXX test/cpp_headers/env_dpdk.o 00:02:22.479 CXX test/cpp_headers/env.o 00:02:22.479 LINK nvmf_tgt 00:02:22.479 CXX test/cpp_headers/event.o 00:02:22.479 CXX test/cpp_headers/fd_group.o 00:02:22.479 CXX test/cpp_headers/fd.o 00:02:22.479 LINK spdk_trace_record 00:02:22.479 CXX test/cpp_headers/file.o 00:02:22.479 LINK iscsi_tgt 00:02:22.479 LINK env_dpdk_post_init 00:02:22.479 LINK stub 00:02:22.479 CXX test/cpp_headers/ftl.o 00:02:22.479 CXX test/cpp_headers/gpt_spec.o 00:02:22.479 CXX test/cpp_headers/hexlify.o 00:02:22.479 CXX test/cpp_headers/histogram_data.o 00:02:22.479 CXX test/cpp_headers/idxd.o 00:02:22.479 CXX test/cpp_headers/idxd_spec.o 00:02:22.479 LINK spdk_tgt 00:02:22.479 LINK bdev_svc 00:02:22.479 LINK verify 00:02:22.479 LINK ioat_perf 00:02:22.479 CXX test/cpp_headers/init.o 00:02:22.740 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:22.740 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:22.740 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:22.740 CXX test/cpp_headers/ioat.o 00:02:22.740 CXX test/cpp_headers/ioat_spec.o 00:02:22.740 CXX test/cpp_headers/iscsi_spec.o 00:02:22.740 LINK spdk_dd 00:02:22.740 CXX test/cpp_headers/json.o 00:02:22.740 CXX test/cpp_headers/jsonrpc.o 00:02:22.740 CXX test/cpp_headers/keyring.o 00:02:22.740 CXX test/cpp_headers/keyring_module.o 00:02:22.740 LINK spdk_trace 00:02:23.003 CXX test/cpp_headers/likely.o 00:02:23.003 CXX test/cpp_headers/log.o 00:02:23.003 CXX test/cpp_headers/lvol.o 00:02:23.003 CXX test/cpp_headers/memory.o 00:02:23.003 CXX test/cpp_headers/mmio.o 00:02:23.003 CXX test/cpp_headers/nbd.o 00:02:23.003 CXX test/cpp_headers/notify.o 00:02:23.003 CXX test/cpp_headers/nvme.o 00:02:23.003 LINK pci_ut 00:02:23.003 CXX test/cpp_headers/nvme_intel.o 00:02:23.003 CXX test/cpp_headers/nvme_ocssd.o 00:02:23.003 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:23.003 CXX test/cpp_headers/nvme_spec.o 00:02:23.003 CXX test/cpp_headers/nvme_zns.o 00:02:23.003 CXX test/cpp_headers/nvmf_cmd.o 00:02:23.003 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:23.003 CXX test/cpp_headers/nvmf.o 00:02:23.003 CXX test/cpp_headers/nvmf_spec.o 00:02:23.003 LINK test_dma 00:02:23.003 CXX test/cpp_headers/nvmf_transport.o 00:02:23.003 CXX test/cpp_headers/opal.o 00:02:23.003 CC test/event/event_perf/event_perf.o 00:02:23.003 CXX test/cpp_headers/opal_spec.o 00:02:23.003 CXX test/cpp_headers/pci_ids.o 
00:02:23.003 CC test/event/reactor/reactor.o 00:02:23.003 CC test/event/reactor_perf/reactor_perf.o 00:02:23.003 CXX test/cpp_headers/pipe.o 00:02:23.269 CXX test/cpp_headers/queue.o 00:02:23.269 LINK nvme_fuzz 00:02:23.269 CC test/event/app_repeat/app_repeat.o 00:02:23.269 CC examples/sock/hello_world/hello_sock.o 00:02:23.269 CXX test/cpp_headers/reduce.o 00:02:23.269 CC examples/vmd/led/led.o 00:02:23.269 LINK spdk_bdev 00:02:23.269 CC examples/vmd/lsvmd/lsvmd.o 00:02:23.269 CC examples/idxd/perf/perf.o 00:02:23.269 CXX test/cpp_headers/rpc.o 00:02:23.269 LINK spdk_nvme 00:02:23.269 CXX test/cpp_headers/scheduler.o 00:02:23.269 CXX test/cpp_headers/scsi.o 00:02:23.269 CC test/event/scheduler/scheduler.o 00:02:23.269 CC examples/thread/thread/thread_ex.o 00:02:23.269 CXX test/cpp_headers/scsi_spec.o 00:02:23.269 CXX test/cpp_headers/sock.o 00:02:23.269 CXX test/cpp_headers/stdinc.o 00:02:23.269 CXX test/cpp_headers/string.o 00:02:23.269 CXX test/cpp_headers/thread.o 00:02:23.269 CXX test/cpp_headers/trace.o 00:02:23.269 CXX test/cpp_headers/trace_parser.o 00:02:23.269 CXX test/cpp_headers/tree.o 00:02:23.269 CXX test/cpp_headers/ublk.o 00:02:23.269 CXX test/cpp_headers/util.o 00:02:23.269 CXX test/cpp_headers/uuid.o 00:02:23.528 CXX test/cpp_headers/version.o 00:02:23.528 CXX test/cpp_headers/vfio_user_pci.o 00:02:23.528 CXX test/cpp_headers/vfio_user_spec.o 00:02:23.528 CXX test/cpp_headers/vhost.o 00:02:23.528 LINK event_perf 00:02:23.528 CXX test/cpp_headers/vmd.o 00:02:23.528 CXX test/cpp_headers/xor.o 00:02:23.528 LINK reactor 00:02:23.528 CXX test/cpp_headers/zipf.o 00:02:23.528 LINK reactor_perf 00:02:23.528 LINK lsvmd 00:02:23.528 LINK vhost_fuzz 00:02:23.528 LINK app_repeat 00:02:23.528 LINK led 00:02:23.528 LINK spdk_nvme_perf 00:02:23.528 LINK mem_callbacks 00:02:23.528 CC app/vhost/vhost.o 00:02:23.528 LINK spdk_nvme_identify 00:02:23.528 LINK spdk_top 00:02:23.787 LINK hello_sock 00:02:23.787 LINK scheduler 00:02:23.787 CC test/nvme/reset/reset.o 00:02:23.787 CC test/nvme/aer/aer.o 00:02:23.787 CC test/nvme/err_injection/err_injection.o 00:02:23.787 CC test/nvme/e2edp/nvme_dp.o 00:02:23.787 CC test/nvme/startup/startup.o 00:02:23.787 CC test/nvme/overhead/overhead.o 00:02:23.787 CC test/nvme/sgl/sgl.o 00:02:23.787 CC test/nvme/reserve/reserve.o 00:02:23.787 CC test/nvme/simple_copy/simple_copy.o 00:02:23.787 CC test/nvme/connect_stress/connect_stress.o 00:02:23.787 CC test/nvme/boot_partition/boot_partition.o 00:02:23.787 LINK thread 00:02:23.787 CC test/blobfs/mkfs/mkfs.o 00:02:23.787 CC test/nvme/compliance/nvme_compliance.o 00:02:23.787 CC test/accel/dif/dif.o 00:02:23.787 CC test/nvme/cuse/cuse.o 00:02:23.787 CC test/nvme/fdp/fdp.o 00:02:23.787 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:23.787 CC test/nvme/fused_ordering/fused_ordering.o 00:02:23.787 CC test/lvol/esnap/esnap.o 00:02:23.787 LINK idxd_perf 00:02:24.046 LINK vhost 00:02:24.046 LINK startup 00:02:24.046 LINK connect_stress 00:02:24.046 LINK reserve 00:02:24.046 LINK boot_partition 00:02:24.046 LINK mkfs 00:02:24.046 LINK simple_copy 00:02:24.046 LINK doorbell_aers 00:02:24.046 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:24.046 CC examples/nvme/hotplug/hotplug.o 00:02:24.046 LINK err_injection 00:02:24.046 CC examples/nvme/abort/abort.o 00:02:24.046 CC examples/nvme/hello_world/hello_world.o 00:02:24.046 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:24.046 CC examples/nvme/arbitration/arbitration.o 00:02:24.046 CC examples/nvme/reconnect/reconnect.o 00:02:24.046 LINK fused_ordering 
00:02:24.046 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:24.305 LINK aer 00:02:24.305 LINK memory_ut 00:02:24.305 LINK sgl 00:02:24.305 LINK fdp 00:02:24.305 LINK reset 00:02:24.305 LINK nvme_dp 00:02:24.305 LINK nvme_compliance 00:02:24.305 LINK overhead 00:02:24.305 LINK pmr_persistence 00:02:24.305 LINK cmb_copy 00:02:24.305 CC examples/accel/perf/accel_perf.o 00:02:24.305 LINK hello_world 00:02:24.305 LINK dif 00:02:24.305 CC examples/blob/hello_world/hello_blob.o 00:02:24.305 CC examples/blob/cli/blobcli.o 00:02:24.562 LINK arbitration 00:02:24.562 LINK hotplug 00:02:24.562 LINK abort 00:02:24.562 LINK reconnect 00:02:24.562 LINK hello_blob 00:02:24.821 LINK nvme_manage 00:02:24.821 CC test/bdev/bdevio/bdevio.o 00:02:24.821 LINK accel_perf 00:02:24.821 LINK blobcli 00:02:25.079 LINK iscsi_fuzz 00:02:25.079 CC examples/bdev/hello_world/hello_bdev.o 00:02:25.079 LINK bdevio 00:02:25.079 CC examples/bdev/bdevperf/bdevperf.o 00:02:25.337 LINK cuse 00:02:25.337 LINK hello_bdev 00:02:25.904 LINK bdevperf 00:02:26.471 CC examples/nvmf/nvmf/nvmf.o 00:02:26.728 LINK nvmf 00:02:29.256 LINK esnap 00:02:29.823 00:02:29.823 real 0m42.134s 00:02:29.823 user 7m24.050s 00:02:29.823 sys 1m49.658s 00:02:29.823 09:35:46 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:29.823 09:35:46 make -- common/autotest_common.sh@10 -- $ set +x 00:02:29.823 ************************************ 00:02:29.823 END TEST make 00:02:29.823 ************************************ 00:02:29.823 09:35:46 -- common/autotest_common.sh@1142 -- $ return 0 00:02:29.823 09:35:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:29.823 09:35:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:29.823 09:35:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:29.823 09:35:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.823 09:35:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:29.823 09:35:46 -- pm/common@44 -- $ pid=1664096 00:02:29.823 09:35:46 -- pm/common@50 -- $ kill -TERM 1664096 00:02:29.823 09:35:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.823 09:35:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:29.823 09:35:46 -- pm/common@44 -- $ pid=1664098 00:02:29.823 09:35:46 -- pm/common@50 -- $ kill -TERM 1664098 00:02:29.823 09:35:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.823 09:35:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:29.823 09:35:46 -- pm/common@44 -- $ pid=1664100 00:02:29.823 09:35:46 -- pm/common@50 -- $ kill -TERM 1664100 00:02:29.823 09:35:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.823 09:35:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:29.823 09:35:46 -- pm/common@44 -- $ pid=1664128 00:02:29.823 09:35:46 -- pm/common@50 -- $ sudo -E kill -TERM 1664128 00:02:29.824 09:35:46 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:29.824 09:35:46 -- nvmf/common.sh@7 -- # uname -s 00:02:29.824 09:35:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:29.824 09:35:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:29.824 09:35:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:29.824 09:35:46 -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:29.824 09:35:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:29.824 09:35:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:29.824 09:35:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:29.824 09:35:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:29.824 09:35:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:29.824 09:35:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:29.824 09:35:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:29.824 09:35:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:29.824 09:35:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:29.824 09:35:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:29.824 09:35:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:29.824 09:35:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:29.824 09:35:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:29.824 09:35:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:29.824 09:35:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:29.824 09:35:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:29.824 09:35:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.824 09:35:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.824 09:35:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.824 09:35:46 -- paths/export.sh@5 -- # export PATH 00:02:29.824 09:35:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.824 09:35:46 -- nvmf/common.sh@47 -- # : 0 00:02:29.824 09:35:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:29.824 09:35:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:29.824 09:35:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:29.824 09:35:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:29.824 09:35:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:29.824 09:35:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:29.824 09:35:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:29.824 09:35:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:29.824 09:35:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:29.824 09:35:46 -- spdk/autotest.sh@32 -- # uname -s 00:02:29.824 09:35:46 -- spdk/autotest.sh@32 -- # '[' 
Linux = Linux ']' 00:02:29.824 09:35:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:29.824 09:35:46 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:29.824 09:35:46 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:29.824 09:35:46 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:29.824 09:35:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:29.824 09:35:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:29.824 09:35:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:29.824 09:35:46 -- spdk/autotest.sh@48 -- # udevadm_pid=1735583 00:02:29.824 09:35:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:29.824 09:35:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:29.824 09:35:46 -- pm/common@17 -- # local monitor 00:02:29.824 09:35:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.824 09:35:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.824 09:35:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.824 09:35:46 -- pm/common@21 -- # date +%s 00:02:29.824 09:35:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.824 09:35:46 -- pm/common@21 -- # date +%s 00:02:29.824 09:35:46 -- pm/common@25 -- # sleep 1 00:02:29.824 09:35:46 -- pm/common@21 -- # date +%s 00:02:29.824 09:35:46 -- pm/common@21 -- # date +%s 00:02:29.824 09:35:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721028946 00:02:29.824 09:35:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721028946 00:02:29.824 09:35:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721028946 00:02:29.824 09:35:46 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721028946 00:02:29.824 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721028946_collect-vmstat.pm.log 00:02:29.824 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721028946_collect-cpu-load.pm.log 00:02:29.824 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721028946_collect-cpu-temp.pm.log 00:02:29.824 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721028946_collect-bmc-pm.bmc.pm.log 00:02:30.759 09:35:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:30.759 09:35:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:30.759 09:35:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:30.759 09:35:47 -- common/autotest_common.sh@10 -- # set +x 00:02:30.759 09:35:47 -- spdk/autotest.sh@59 -- # create_test_list 
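The monitor setup traced above follows a simple pidfile convention: autotest launches one collector per resource, each gets a pidfile under .../output/power, and stop_monitor_resources later TERMs whatever pidfiles it finds (the kill -TERM lines near the start of this trace). A simplified sketch of that lifecycle, using hypothetical variable names (POWER_DIR, PM_DIR) rather than SPDK's actual scripts/perf/pm sources:

    POWER_DIR=$OUTPUT_DIR/power                    # holds *.pid files and *.pm.log output
    stamp=$(date +%s)                              # e.g. 1721028946, as in the log names above
    for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
        "$PM_DIR/$mon" -d "$POWER_DIR" -l -p "monitor.autotest.sh.$stamp" &
        echo $! > "$POWER_DIR/$mon.pid"            # pidfile consumed at teardown
    done
    # teardown (signal_monitor_resources TERM):
    for pf in "$POWER_DIR"/*.pid; do
        [[ -e $pf ]] && kill -TERM "$(cat "$pf")"
    done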
00:02:30.759 09:35:47 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:30.759 09:35:47 -- common/autotest_common.sh@10 -- # set +x 00:02:30.759 09:35:47 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:30.759 09:35:47 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:30.759 09:35:47 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:30.759 09:35:47 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:30.759 09:35:47 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:30.759 09:35:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:30.759 09:35:47 -- common/autotest_common.sh@1455 -- # uname 00:02:30.759 09:35:47 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:30.759 09:35:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:30.759 09:35:47 -- common/autotest_common.sh@1475 -- # uname 00:02:30.759 09:35:47 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:30.759 09:35:47 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:30.759 09:35:47 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:30.759 09:35:47 -- spdk/autotest.sh@72 -- # hash lcov 00:02:30.759 09:35:47 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:30.759 09:35:47 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:30.759 --rc lcov_branch_coverage=1 00:02:30.759 --rc lcov_function_coverage=1 00:02:30.759 --rc genhtml_branch_coverage=1 00:02:30.759 --rc genhtml_function_coverage=1 00:02:30.759 --rc genhtml_legend=1 00:02:30.759 --rc geninfo_all_blocks=1 00:02:30.759 ' 00:02:30.759 09:35:47 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:30.759 --rc lcov_branch_coverage=1 00:02:30.759 --rc lcov_function_coverage=1 00:02:30.759 --rc genhtml_branch_coverage=1 00:02:30.759 --rc genhtml_function_coverage=1 00:02:30.759 --rc genhtml_legend=1 00:02:30.759 --rc geninfo_all_blocks=1 00:02:30.759 ' 00:02:30.759 09:35:47 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:30.759 --rc lcov_branch_coverage=1 00:02:30.759 --rc lcov_function_coverage=1 00:02:30.759 --rc genhtml_branch_coverage=1 00:02:30.759 --rc genhtml_function_coverage=1 00:02:30.759 --rc genhtml_legend=1 00:02:30.759 --rc geninfo_all_blocks=1 00:02:30.759 --no-external' 00:02:30.759 09:35:47 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:30.759 --rc lcov_branch_coverage=1 00:02:30.759 --rc lcov_function_coverage=1 00:02:30.759 --rc genhtml_branch_coverage=1 00:02:30.759 --rc genhtml_function_coverage=1 00:02:30.759 --rc genhtml_legend=1 00:02:30.759 --rc geninfo_all_blocks=1 00:02:30.759 --no-external' 00:02:30.759 09:35:47 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:31.018 lcov: LCOV version 1.14 00:02:31.018 09:35:47 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:36.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 
00:02:36.313 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno
00:02:36.313 [identical paired "no functions found" / "GCOV did not produce any data" warnings repeat for every remaining test/cpp_headers/*.gcno file, accel_module.gcno through scheduler.gcno; elided]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:36.314 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:36.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:36.314 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:36.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:36.314 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for 
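These warnings are expected noise from the coverage pass rather than test failures: the cpp_headers unit tests compile each public SPDK header into an object with no executable functions, so when lcov's geninfo walks the matching .gcno files it finds nothing to extract. A minimal sketch of how such a capture step can be reproduced and the noisy files listed; the lcov options are real, but the output paths and the stderr-filtering step are illustrative assumptions, not the autotest source:

```bash
#!/usr/bin/env bash
# Hypothetical standalone coverage capture over an SPDK build tree.
src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
out=$src/ut_coverage    # assumed output directory
mkdir -p "$out"

# geninfo runs underneath `lcov --capture`; header-only objects trigger
# the "no functions found" / "did not produce any data" warning pairs.
lcov --capture --directory "$src" --output-file "$out/cov_total.info" \
     --no-external 2> "$out/geninfo.log"

# List each .gcno that contributed no data (expected for test/cpp_headers).
grep 'did not produce any data' "$out/geninfo.log" | sort -u
```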
00:02:58.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:58.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:05.099 09:36:21 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:05.099 09:36:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:05.099 09:36:21 -- common/autotest_common.sh@10 -- # set +x 00:03:05.099 09:36:21 -- spdk/autotest.sh@91 -- # rm -f 00:03:05.099 09:36:21 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.667 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:05.667 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:05.667 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:05.667 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:05.667 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:05.667 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:05.667 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:05.667 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:05.926 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:05.926 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:05.926 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:05.926 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:05.926 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:05.926 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:05.926 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:05.926 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:05.926 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:05.926 09:36:22 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:05.926 09:36:22 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:05.926 09:36:22 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:05.926 09:36:22 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:05.926 09:36:22 -- 
common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:05.926 09:36:22 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:05.926 09:36:22 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:05.926 09:36:22 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:05.926 09:36:22 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:05.926 09:36:22 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:05.926 09:36:22 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:05.926 09:36:22 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:05.926 09:36:22 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:05.926 09:36:22 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:05.926 09:36:22 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:06.184 No valid GPT data, bailing 00:03:06.184 09:36:22 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:06.184 09:36:22 -- scripts/common.sh@391 -- # pt= 00:03:06.184 09:36:22 -- scripts/common.sh@392 -- # return 1 00:03:06.184 09:36:22 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:06.184 1+0 records in 00:03:06.184 1+0 records out 00:03:06.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00269496 s, 389 MB/s 00:03:06.184 09:36:22 -- spdk/autotest.sh@118 -- # sync 00:03:06.184 09:36:22 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:06.184 09:36:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:06.184 09:36:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:08.088 09:36:24 -- spdk/autotest.sh@124 -- # uname -s 00:03:08.088 09:36:24 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:08.088 09:36:24 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:08.088 09:36:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:08.088 09:36:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.088 09:36:24 -- common/autotest_common.sh@10 -- # set +x 00:03:08.088 ************************************ 00:03:08.088 START TEST setup.sh 00:03:08.088 ************************************ 00:03:08.088 09:36:24 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:08.088 * Looking for test storage... 00:03:08.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:08.088 09:36:24 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:08.088 09:36:24 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:08.088 09:36:24 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:08.088 09:36:24 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:08.088 09:36:24 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.088 09:36:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:08.088 ************************************ 00:03:08.088 START TEST acl 00:03:08.088 ************************************ 00:03:08.088 09:36:24 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:08.088 * Looking for test storage... 
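The trace above is the pre-cleanup pass: each /sys/block/nvme* queue is checked for zoned namespaces, and the namespace is probed for a partition-table signature before its first megabyte is zeroed. A standalone sketch of that flow; the helper name mirrors the traced is_block_zoned, but this is an illustration rather than the autotest_common.sh source, and the wipe condition is simplified from scripts/common.sh's block_in_use gate:

```bash
#!/usr/bin/env bash
# Sketch: skip zoned namespaces, wipe namespaces with no partition table.

is_block_zoned() {
    local device=$1
    # Non-zoned drives report "none" in the zoned queue attribute.
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}

for nvme in /dev/nvme*n1; do
    dev=${nvme##*/}
    if is_block_zoned "$dev"; then
        echo "skipping zoned namespace $nvme"
        continue
    fi
    # blkid prints the partition-table type (e.g. gpt) and exits non-zero
    # when no signature exists: the "No valid GPT data, bailing" case above.
    if ! pt=$(blkid -s PTTYPE -o value "$nvme"); then
        dd if=/dev/zero of="$nvme" bs=1M count=1   # clear stale metadata
    fi
done
```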
00:03:08.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:08.088 09:36:24 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:08.088 09:36:24 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:08.088 09:36:24 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:08.088 09:36:24 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:08.088 09:36:24 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:08.088 09:36:24 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:08.088 09:36:24 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:08.088 09:36:24 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:08.088 09:36:24 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:08.088 09:36:24 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:08.088 09:36:24 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:08.088 09:36:24 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:08.088 09:36:24 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:08.088 09:36:24 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:08.088 09:36:24 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:08.088 09:36:24 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.463 09:36:26 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:09.463 09:36:26 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:09.463 09:36:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.463 09:36:26 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:09.463 09:36:26 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.463 09:36:26 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:10.397 Hugepages 00:03:10.397 node hugesize free / total 00:03:10.397 09:36:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:10.397 09:36:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:10.397 09:36:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.397 09:36:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:10.397 09:36:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:10.397 09:36:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.397 09:36:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:10.397 09:36:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:10.397 09:36:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.397 00:03:10.397 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:10.397 09:36:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:10.397 09:36:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:10.397 09:36:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.397 09:36:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:10.397 09:36:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.397 09:36:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.397 09:36:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.397 09:36:27 setup.sh.acl -- setup/acl.sh@19 
-- [xtrace, setup/acl.sh@18-20, 09:36:27: the PCI walk repeats '[[ <bdf> == *:*:*.* ]]' / '[[ ioatdma == nvme ]]' / continue / read for 0000:00:04.1 through 0000:80:04.3, all bound to ioatdma] 00:03:10.656 09:36:27 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:10.656 09:36:27 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:10.656 09:36:27 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:10.656 09:36:27 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:10.656 09:36:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:10.656 ************************************ 00:03:10.656 START TEST denied 00:03:10.656 ************************************ 00:03:10.656 09:36:27 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:10.656 09:36:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:10.656 09:36:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:10.656 09:36:27 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:10.656 09:36:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.656 09:36:27 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:12.034 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:12.034 09:36:28 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:12.034 09:36:28 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:12.034 09:36:28 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:12.034 09:36:28 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:12.034 09:36:28 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:12.034 09:36:28 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:12.034 09:36:28 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:12.034 09:36:28 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:12.034 09:36:28 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:12.034 09:36:28 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.563 00:03:14.563 real 0m3.795s 00:03:14.563 user 0m1.106s 00:03:14.563 sys 0m1.760s 00:03:14.563 09:36:31 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:14.563 09:36:31 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:14.563 ************************************ 00:03:14.563 END TEST denied 00:03:14.563 ************************************ 00:03:14.563 09:36:31 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:14.563 09:36:31 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:14.563 09:36:31 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:14.563 09:36:31 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:14.563 09:36:31 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:14.563 ************************************ 00:03:14.563 START TEST allowed 00:03:14.563 ************************************ 00:03:14.563 09:36:31 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:14.563 09:36:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:14.563 09:36:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:14.563 09:36:31 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:14.563 09:36:31 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.563 09:36:31 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:17.105 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:17.105 09:36:33 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:17.105 09:36:33 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:17.105 09:36:33 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:17.105 09:36:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:17.105 09:36:33 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.526 00:03:18.526 real 0m3.738s 00:03:18.526 user 0m0.956s 00:03:18.526 sys 0m1.637s 00:03:18.526 09:36:34 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:18.526 09:36:34 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:18.526 ************************************ 00:03:18.526 END TEST allowed 00:03:18.526 ************************************ 00:03:18.526 09:36:34 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:18.526 00:03:18.526 real 0m10.256s 00:03:18.526 user 0m3.112s 00:03:18.526 sys 0m5.133s 00:03:18.526 09:36:34 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:18.526 09:36:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:18.526 ************************************ 00:03:18.526 END TEST acl 00:03:18.527 ************************************ 00:03:18.527 09:36:34 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:18.527 09:36:34 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:18.527 09:36:34 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:18.527 09:36:34 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.527 09:36:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:18.527 ************************************ 00:03:18.527 START TEST hugepages 00:03:18.527 ************************************ 00:03:18.527 09:36:34 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:18.527 * Looking for test storage... 00:03:18.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:18.527 09:36:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:18.527 09:36:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:18.527 09:36:34 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:18.527 09:36:34 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:18.527 09:36:34 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:18.527 09:36:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:18.527 09:36:34 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:18.527 09:36:34 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:18.527 09:36:34 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:18.527 09:36:34 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:18.527 09:36:34 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.527 09:36:34 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.527 09:36:34 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.527 09:36:34 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.527 09:36:34 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.527 09:36:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.527 09:36:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.527 09:36:34 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42277364 kB' 'MemAvailable: 45784520 kB' 'Buffers: 2704 kB' 'Cached: 11709856 kB' 'SwapCached: 0 kB' 'Active: 8703832 kB' 'Inactive: 3506596 kB' 'Active(anon): 8309240 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501224 kB' 'Mapped: 188176 kB' 'Shmem: 7811372 kB' 'KReclaimable: 199612 kB' 'Slab: 572564 kB' 'SReclaimable: 199612 kB' 'SUnreclaim: 372952 kB' 'KernelStack: 12928 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562296 kB' 'Committed_AS: 9427332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB' 00:03:18.527 [xtrace, setup/common.sh@31-32, 09:36:34 to 09:36:35: get_meminfo walks the captured meminfo fields with IFS=': ' / read -r var val _ and tests each field name against Hugepagesize; the failed checks plus continue for MemTotal through HugePages_Free are elided here] 00:03:18.528 09:36:35 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.528 
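The wall of IFS=': ' / read / [[ ... ]] entries above is setup/common.sh's get_meminfo scanning the meminfo snapshot one 'key: value' field at a time until the requested key (here Hugepagesize) matches, which is why every other field costs one failed test plus a continue before the final echo 2048. A simplified sketch of that loop; the real helper also snapshots the file into an array and handles per-node meminfo files by stripping 'Node N' prefixes, which this version omits:

```bash
#!/usr/bin/env bash
# Simplified get_meminfo: print the value of one /proc/meminfo field.

get_meminfo() {
    local get=$1 var val _
    # IFS=': ' splits "Hugepagesize: 2048 kB" into var/val/unit.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"     # e.g. 2048 for Hugepagesize on this builder
            return 0
        fi
    done < /proc/meminfo
    return 1                # field not present
}

get_meminfo Hugepagesize    # -> 2048
```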
09:36:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:18.528 09:36:35 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:18.528 09:36:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:18.528 09:36:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.528 09:36:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:18.528 ************************************ 00:03:18.528 START TEST default_setup 00:03:18.528 ************************************ 00:03:18.528 09:36:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:18.528 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:19.462 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:19.462 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:19.462 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:19.462 
00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:18.529 09:36:35 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:19.462 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:19.462 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:19.462 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:19.462 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:19.462 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:19.462 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:19.462 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:19.462 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:19.720 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:19.720 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:19.720 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:19.720 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:19.720 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:19.720 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:19.720 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:19.720 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:20.661 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
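The "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines are scripts/setup.sh detaching the I/OAT DMA engines and the NVMe drive from their kernel drivers and handing them to vfio-pci so SPDK can drive them from user space. The script wraps this in many checks; the underlying sysfs mechanism looks roughly like the following sketch (BDF taken from the log; this is not the script's actual code):

    bdf=0000:88:00.0                      # the NVMe device rebound above
    dev=/sys/bus/pci/devices/$bdf

    # Detach whichever kernel driver is currently bound (nvme/ioatdma here).
    if [[ -e $dev/driver ]]; then
        echo "$bdf" > "$dev/driver/unbind"
    fi

    # Restrict driver matching to vfio-pci, then ask the PCI core to reprobe.
    echo vfio-pci > "$dev/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe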
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:20.662 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44387500 kB' 'MemAvailable: 47894624 kB' 'Buffers: 2704 kB' 'Cached: 11709952 kB' 'SwapCached: 0 kB' 'Active: 8722376 kB' 'Inactive: 3506596 kB' 'Active(anon): 8327784 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520084 kB' 'Mapped: 187868 kB' 'Shmem: 7811468 kB' 'KReclaimable: 199548 kB' 'Slab: 572088 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 372540 kB' 'KernelStack: 12832 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9444748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB'
[... setup/common.sh@31-32: the IFS=': '/read/compare iterations repeat for every snapshot field above, hitting "continue" on each name that is not AnonHugePages ...]
00:03:20.663 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:20.663 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:20.663 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:20.663 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
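get_meminfo also accepts an optional node number: with node= empty, the [[ -e /sys/devices/system/node/node/meminfo ]] test above fails and it falls back to /proc/meminfo, while the mem=("${mem[@]#Node +([0-9]) }") step strips the "Node N " prefix that per-node meminfo files carry, so both sources parse identically. A node-aware sketch of that selection (function name ours; the extglob pattern in the trace implies shopt -s extglob):

    shopt -s extglob

    get_meminfo_node_sketch() {
        local get=$1 node=${2:-} mem_f mem var val _ line
        mem_f=/proc/meminfo
        # Per-node counters live in sysfs, one "Node N Field: value" per line.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # normalize the per-node lines
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && echo "$val" && return 0
        done
        return 1
    }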
00:03:20.663 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:20.663 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.663 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:20.663 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:20.663 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:20.663 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.663 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.663 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.663 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.663 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.663 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:20.663 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:20.663 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44388228 kB' 'MemAvailable: 47895352 kB' 'Buffers: 2704 kB' 'Cached: 11709952 kB' 'SwapCached: 0 kB' 'Active: 8722252 kB' 'Inactive: 3506596 kB' 'Active(anon): 8327660 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519580 kB' 'Mapped: 187972 kB' 'Shmem: 7811468 kB' 'KReclaimable: 199548 kB' 'Slab: 572156 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 372608 kB' 'KernelStack: 12848 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9444764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB'
[... setup/common.sh@31-32: the same field-by-field scan walks every name down to HugePages_Rsvd before HugePages_Surp matches ...]
00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
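verify_nr_hugepages has now collected anon=0 and surp=0 and is about to fetch HugePages_Rsvd; every snapshot reports HugePages_Total: 1024 and HugePages_Free: 1024, exactly the pool default_setup requested. The check it is building toward amounts to something like the following (the exact assertions in hugepages.sh are not visible in this span, so this condition is an assumption; get_meminfo_sketch is the helper sketched earlier):

    expected=1024   # nr_hugepages requested by the test

    total=$(get_meminfo_sketch HugePages_Total)   # 1024 in the snapshots above
    free=$(get_meminfo_sketch HugePages_Free)     # 1024
    rsvd=$(get_meminfo_sketch HugePages_Rsvd)     # 0
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0

    if (( total == expected && free == total && rsvd == 0 && surp == 0 )); then
        echo "hugepage pool verified: $total pages allocated, all free"
    else
        echo "mismatch: total=$total free=$free rsvd=$rsvd surp=$surp" >&2
    fi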
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44388576 kB' 'MemAvailable: 47895700 kB' 'Buffers: 2704 kB' 'Cached: 11709972 kB' 'SwapCached: 0 kB' 'Active: 8721632 kB' 'Inactive: 3506596 kB' 'Active(anon): 8327040 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518880 kB' 'Mapped: 187892 kB' 'Shmem: 7811488 kB' 'KReclaimable: 199548 kB' 'Slab: 572128 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 372580 kB' 'KernelStack: 12816 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9444788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB' 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.665 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
[xtrace elided: setup/common.sh@31-32 walks /proc/meminfo with IFS=': ' and "read -r var val _", comparing each field name against HugePages_Rsvd; SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total and HugePages_Free are each skipped via "continue"]
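
The loop elided above is the field-matching scan inside SPDK's setup/common.sh get_meminfo helper: split each meminfo line on ': ', compare the key, and "continue" until the requested key is found. As a minimal standalone sketch of that pattern (a paraphrase of the traced commands, with my own function name, not the SPDK source itself):

    #!/usr/bin/env bash
    # Minimal sketch of the traced get_meminfo scan: read each /proc/meminfo
    # line as "key: value [unit]" and skip with `continue` until the requested
    # key matches, then print its value.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching fields fall through, as in the trace
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_field HugePages_Rsvd   # prints 0 on the host traced above
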
00:03:20.667 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:20.667 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:20.667 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:20.667 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:20.667 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:20.667 nr_hugepages=1024
09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:20.667 resv_hugepages=0
09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:20.667 surplus_hugepages=0
09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:20.667 anon_hugepages=0
09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:20.667 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:20.667 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:20.667 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:20.667 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:20.667 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:20.667 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:20.667 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.667 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.667 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.667 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.667 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.667 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:20.667 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:20.667 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44387820 kB' 'MemAvailable: 47894944 kB' 'Buffers: 2704 kB' 'Cached: 11709992 kB' 'SwapCached: 0 kB' 'Active: 8721628 kB' 'Inactive: 3506596 kB' 'Active(anon): 8327036 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518844 kB' 'Mapped: 187892 kB' 'Shmem: 7811508 kB' 'KReclaimable: 199548 kB' 'Slab: 572128 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 372580 kB' 'KernelStack: 12800 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9444808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB'
[xtrace elided: the @31-32 loop then scans this snapshot field by field against HugePages_Total; MemTotal through Unaccepted are each skipped via "continue" until HugePages_Total matches]
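
With HugePages_Rsvd and HugePages_Surp both reading back as 0, the accounting echoes above reduce the invariant at setup/hugepages.sh@107 to a simple identity: all 1024 configured pages must be visible in the kernel's HugePages_Total. A hedged sketch of that check, reusing the get_meminfo_field sketch from earlier (again my names, not SPDK's):

    # Accounting identity asserted at setup/hugepages.sh@107/@110 in the trace:
    # configured pages + surplus + reserved must equal what the kernel reports.
    nr_hugepages=1024
    resv=$(get_meminfo_field HugePages_Rsvd)    # 0 in the log above
    surp=$(get_meminfo_field HugePages_Surp)    # 0 in the log above
    total=$(get_meminfo_field HugePages_Total)  # 1024 in the log above
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
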
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:20.669 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20776164 kB' 'MemUsed: 12100776 kB' 'SwapCached: 0 kB' 'Active: 5582408 kB' 'Inactive: 3265212 kB' 'Active(anon): 5393836 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8583196 kB' 'Mapped: 73416 kB' 'AnonPages: 267572 kB' 'Shmem: 5129412 kB' 'KernelStack: 6952 kB' 'PageTables: 4776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118396 kB' 'Slab: 319748 kB' 'SReclaimable: 118396 kB' 'SUnreclaim: 201352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: the @31-32 loop scans the node0 snapshot against HugePages_Surp; MemTotal through Unaccepted are each skipped via "continue"]
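
The same get_meminfo helper goes per-node here: called as "get_meminfo HugePages_Surp 0", it swaps mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo (the @23/@24 steps above) and strips the "Node <n> " prefix those lines carry (the @29 step). A standalone sketch of that behaviour, under the same naming caveats as before:

    #!/usr/bin/env bash
    # Sketch of the node-aware lookup traced at setup/common.sh@22-29:
    # prefer the per-node meminfo file when a node id is given, and strip
    # the "Node <n> " prefix before scanning for the key.
    shopt -s extglob
    get_node_meminfo() {
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }    # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }

    get_node_meminfo HugePages_Surp 0   # prints 0 for node0 in the log above
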
00:03:20.930 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.930 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:20.930 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:20.930 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:20.930 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.930 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:20.930 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:20.930 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:20.930 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.930 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:20.930 09:36:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:20.930 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:20.930 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.930 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.930 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.930 09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:20.930 node0=1024 expecting 1024
09:36:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:20.930
00:03:20.930 real 0m2.386s
00:03:20.930 user 0m0.621s
00:03:20.930 sys 0m0.912s
09:36:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:20.930 09:36:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:20.930 ************************************
00:03:20.930 END TEST default_setup
00:03:20.930 ************************************
00:03:20.930 09:36:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:20.930 09:36:37 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:20.930 09:36:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:20.930 09:36:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:20.930 09:36:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:20.930 ************************************
00:03:20.930 START TEST per_node_1G_alloc
00:03:20.930 ************************************
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
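
Two things are worth pulling out of the transition above. First, the sizing arithmetic: get_test_nr_hugepages is passed a size of 1048576 kB (1 GiB) and node ids 0 and 1, and with the 2048 kB Hugepagesize reported in the snapshots that works out to the nr_hugepages=512 per node the trace computes next. Second, as the @146 lines further down show, the reservation is ultimately delegated to scripts/setup.sh through the NRHUGE and HUGENODE environment variables. A small worked sketch:

    # Worked sizing arithmetic behind "get_test_nr_hugepages 1048576 0 1":
    size_kb=1048576    # requested per-node size: 1 GiB in kB
    hugepage_kb=2048   # "Hugepagesize: 2048 kB" from the snapshots above
    echo $(( size_kb / hugepage_kb ))   # 512 -> nodes_test[0]=nodes_test[1]=512

    # The trace then hands the result to the setup script via environment,
    # roughly equivalent to:
    #   NRHUGE=512 HUGENODE=0,1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
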
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:20.930 09:36:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:21.865 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:21.865 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:21.866 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:21.866 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:21.866 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:21.866 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:21.866 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:21.866 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:21.866 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:21.866 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:21.866 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:21.866 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:21.866 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:21.866 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:21.866 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:21.866 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:21.866 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:22.128 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:22.128 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:22.128 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:22.128 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:22.128 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:22.128 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:22.128 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:22.128 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:22.128 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:22.128 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:22.128 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:22.128 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:22.128 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:22.128 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.128 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.128 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.128 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.128 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44403956 kB' 'MemAvailable: 47911080 kB' 'Buffers: 2704 kB' 'Cached: 11710060 kB' 'SwapCached: 0 kB' 'Active: 8721960 kB' 'Inactive: 3506596 kB' 'Active(anon): 8327368 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518908 kB' 'Mapped: 187964 kB' 'Shmem: 7811576 kB' 'KReclaimable: 199548 kB' 'Slab: 572084 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 372536 kB' 'KernelStack: 12912 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9444860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB'
[xtrace truncated: the @31-32 loop begins scanning this snapshot against AnonHugePages; MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal and SwapFree are each skipped via "continue" before the captured log breaks off mid-scan]
Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.129 09:36:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.129 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44407516 kB' 'MemAvailable: 47914640 kB' 'Buffers: 2704 kB' 'Cached: 11710060 kB' 'SwapCached: 0 kB' 'Active: 8722316 kB' 'Inactive: 3506596 kB' 'Active(anon): 8327724 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519276 kB' 'Mapped: 187904 kB' 'Shmem: 7811576 kB' 'KReclaimable: 199548 kB' 'Slab: 572068 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 372520 kB' 'KernelStack: 12960 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9444876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.130 09:36:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.130 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.131 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44408328 kB' 'MemAvailable: 47915452 kB' 'Buffers: 2704 kB' 'Cached: 11710080 kB' 'SwapCached: 0 kB' 'Active: 8722288 kB' 'Inactive: 3506596 kB' 'Active(anon): 8327696 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519252 kB' 'Mapped: 187904 kB' 'Shmem: 7811596 kB' 'KReclaimable: 199548 kB' 'Slab: 572108 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 372560 kB' 'KernelStack: 12960 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9444900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 
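The long [[ key == \H\u\g\e... ]] / continue runs above and below are bash xtrace unrolling a single loop: get_meminfo scans /proc/meminfo (or, when a node is given, that node's own meminfo) line by line until the requested field matches, then echoes its value. The backslash-escaped patterns are simply how xtrace renders an unquoted == match. A hedged reconstruction of the common.sh helper from the trace (the real function may differ in detail):

shopt -s extglob   # needed for the "Node N " prefix strip below

get_meminfo() {
	local get=$1 node=$2
	local mem_f=/proc/meminfo
	local mem var val _ line

	# per-node queries read that NUMA node's own meminfo when it exists;
	# with node unset, the node/node/meminfo test above fails and the
	# system-wide /proc/meminfo is used, as in this trace
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node N " tag

	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		if [[ $var == "$get" ]]; then
			echo "$val"   # kB for sized fields, a bare count otherwise
			return 0
		fi
	done
	return 1
}

The snapshot the loop walks is also a consistency check on the allocation: HugePages_Total: 1024 at Hugepagesize: 2048 kB gives 1024 * 2048 = 2097152 kB, exactly the Hugetlb figure, so both 512-page node reservations landed.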
09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.132 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.133 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.133 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.133 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.133 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.133 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.133 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.133 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.133 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.133 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.133 09:36:38 
00:03:22.133 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: IFS=': ' read -r var val _ / compare / continue loop skips Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free -- none match HugePages_Rsvd]
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:22.134 nr_hugepages=1024
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
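[editor's note] The trace above is the per-key scan inside get_meminfo: each meminfo line is split with IFS=': ' read -r var val _, the key is compared against the requested field, and non-matching keys are skipped with continue until the field matches, at which point its value is echoed. A minimal standalone sketch of that lookup, reconstructed from the xtrace rather than copied from SPDK's setup/common.sh (exact structure and the node-file branch are approximations):

    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem line
        mem_f=/proc/meminfo
        # Per-node queries read the node's own sysfs meminfo file instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; strip it so keys line up.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # the repeated @32 continue above
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Rsvd    # prints 0 on this box, per the trace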
00:03:22.134 resv_hugepages=0
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:22.134 surplus_hugepages=0
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:22.134 anon_hugepages=0
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.134 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44408328 kB' 'MemAvailable: 47915452 kB' 'Buffers: 2704 kB' 'Cached: 11710104 kB' 'SwapCached: 0 kB' 'Active: 8722328 kB' 'Inactive: 3506596 kB' 'Active(anon): 8327736 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519252 kB' 'Mapped: 187904 kB' 'Shmem: 7811620 kB' 'KReclaimable: 199548 kB' 'Slab: 572108 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 372560 kB' 'KernelStack: 12960 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9444924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB'
00:03:22.135 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: read/compare loop skips MemTotal through HugePages_Free -- none match HugePages_Total]
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
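[editor's note] With the pool verified (HugePages_Total is 1024, reserved and surplus both 0), get_nodes next counts the NUMA nodes via a sysfs extglob and records the expected 512-page split per node. A rough standalone sketch following the traced commands; the 512 split is this run's value (1024 pages over 2 nodes), not a general constant:

    shopt -s extglob nullglob

    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} keeps only the trailing node index (".../node0" -> "0").
        nodes_sys[${node##*node}]=512
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || exit 1    # the @33 sanity check: at least one node
    echo "no_nodes=$no_nodes per_node=${nodes_sys[*]}"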
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21835756 kB' 'MemUsed: 11041184 kB' 'SwapCached: 0 kB' 'Active: 5583744 kB' 'Inactive: 3265212 kB' 'Active(anon): 5395172 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8583300 kB' 'Mapped: 73428 kB' 'AnonPages: 268820 kB' 'Shmem: 5129516 kB' 'KernelStack: 7096 kB' 'PageTables: 5236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118396 kB' 'Slab: 319652 kB' 'SReclaimable: 118396 kB' 'SUnreclaim: 201256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:22.136 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: read/compare loop skips MemTotal through HugePages_Free -- none match HugePages_Surp]
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
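[editor's note] Node 0 reports HugePages_Surp: 0, so nothing is added to its expected count and the loop moves on to node 1. Note that per-node meminfo lines in sysfs carry a "Node N " prefix that /proc/meminfo lacks, which is why the @29 step strips it before parsing. A small illustrative snippet (assumes a NUMA node 0 exists on the machine running it):

    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    # "Node 0 MemTotal: ..." -> "MemTotal: ..." so one parser handles both files.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"   # first three lines, now prefix-free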
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22572080 kB' 'MemUsed: 5092672 kB' 'SwapCached: 0 kB' 'Active: 3138780 kB' 'Inactive: 241384 kB' 'Active(anon): 2932760 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 241384 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3129524 kB' 'Mapped: 114476 kB' 'AnonPages: 250648 kB' 'Shmem: 2682120 kB' 'KernelStack: 5880 kB' 'PageTables: 3192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81152 kB' 'Slab: 252448 kB' 'SReclaimable: 81152 kB' 'SUnreclaim: 171296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:22.137 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: read/compare loop skips MemTotal through FilePmdMapped -- none match HugePages_Surp; trace truncated here]
00:03:22.398 09:36:38
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:22.398 node0=512 expecting 512 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:22.398 node1=512 expecting 512 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:22.398 00:03:22.398 real 0m1.421s 00:03:22.398 user 0m0.569s 00:03:22.398 sys 0m0.814s 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:22.398 09:36:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:22.398 ************************************ 00:03:22.398 END TEST per_node_1G_alloc 00:03:22.398 ************************************ 00:03:22.398 09:36:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:22.398 09:36:38 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:22.398 09:36:38 setup.sh.hugepages -- 
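The block above is the get_meminfo helper from setup/common.sh scanning a meminfo file one field at a time. A minimal sketch of that lookup, reconstructed from the xtrace rather than quoted from the SPDK source (the real helper also handles per-node files and reads via mapfile):

    # Look up one field by splitting each meminfo line on ': ' and
    # printing the value once the requested key matches; every other
    # key is skipped, like the 'continue' entries in the trace.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 here, matching the 'echo 0' above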
00:03:22.398 09:36:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:22.398 09:36:38 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:22.398 09:36:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:22.398 09:36:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:22.398 09:36:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:22.398 ************************************
00:03:22.398 START TEST even_2G_alloc
00:03:22.398 ************************************
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:22.398 09:36:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
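get_test_nr_hugepages turned the requested 2097152 kB into nr_hugepages=1024 and split it evenly across the two NUMA nodes, 512 pages each. Assuming size is in kB and using the 2048 kB Hugepagesize visible in the meminfo dumps below, the arithmetic reduces to:

    size_kb=2097152                             # argument to get_test_nr_hugepages (2 GiB)
    hugepage_kb=2048                            # Hugepagesize from /proc/meminfo
    nr_hugepages=$((size_kb / hugepage_kb))     # 1024, handed to setup.sh as NRHUGE
    no_nodes=2
    echo "per node: $((nr_hugepages / no_nodes))"   # 512, matching nodes_test[0] and nodes_test[1]

With HUGE_EVEN_ALLOC=yes, setup.sh is expected to spread those 1024 pages evenly over the nodes; its device-probe output follows.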
00:03:23.775 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:23.775 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:23.775 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:23.775 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:23.775 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:23.775 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:23.775 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:23.775 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:23.775 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:23.775 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:23.775 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:23.775 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:23.775 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:23.775 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:23.775 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:23.775 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:23.775 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.775 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.776 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44397296 kB' 'MemAvailable: 47904420 kB' 'Buffers: 2704 kB' 'Cached: 11710204 kB' 'SwapCached: 0 kB' 'Active: 8722284 kB' 'Inactive: 3506596 kB' 'Active(anon): 8327692 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519208 kB' 'Mapped: 188012 kB' 'Shmem: 7811720 kB' 'KReclaimable: 199548 kB' 'Slab: 572120 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 372572 kB' 'KernelStack: 12784 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9445256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB'
[xtrace elided: setup/common.sh@31-32 compares each field from MemTotal through HardwareCorrupted against AnonHugePages and skips it with continue]
00:03:23.777 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:23.777 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.777 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
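This get_meminfo call ran with an empty node argument, so the /sys/devices/system/node/node/meminfo existence test failed and the system-wide /proc/meminfo was kept; the mem=("${mem[@]#Node +([0-9]) }") expansion only does work for the per-node files, whose lines carry a "Node N " prefix. A sketch of that selection logic (reconstructed from the trace; node as a hypothetical positional argument):

    shopt -s extglob                   # the +([0-9]) pattern below needs extglob
    node=${1-}                         # "" -> system-wide, "0" -> NUMA node 0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip any "Node N " prefix so keys parse uniformly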
00:03:23.777 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:23.777 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:23.777 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.777 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.777 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.777 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.777 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.777 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.777 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.777 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.777 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.777 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.777 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.777 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44396792 kB' 'MemAvailable: 47903916 kB' 'Buffers: 2704 kB' 'Cached: 11710204 kB' 'SwapCached: 0 kB' 'Active: 8722012 kB' 'Inactive: 3506596 kB' 'Active(anon): 8327420 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518912 kB' 'Mapped: 187988 kB' 'Shmem: 7811720 kB' 'KReclaimable: 199548 kB' 'Slab: 572120 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 372572 kB' 'KernelStack: 12832 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9445272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB'
[xtrace elided: setup/common.sh@31-32 compares each field from MemTotal through HugePages_Rsvd against HugePages_Surp and skips it with continue]
00:03:23.779 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.779 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.779 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.779 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
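verify_nr_hugepages has now collected anon=0 and surp=0 and queries HugePages_Rsvd next; with the counters in the dump above there are no surplus, reserved, or transparent hugepages to distort the per-node totals. The bookkeeping, illustrated with the get_meminfo sketch from earlier (not the verbatim hugepages.sh logic):

    anon=$(get_meminfo AnonHugePages)     # 0, so THP is not inflating the count
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0, the query that follows
    total=$(get_meminfo HugePages_Total)  # 1024
    free=$(get_meminfo HugePages_Free)    # 1024
    echo "total=$total free=$free surp=$surp resv=$resv anon=$anon"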
00:03:23.779 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:23.779 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:23.779 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.779 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.779 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.779 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.779 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.779 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.779 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.779 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.779 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.779 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.779 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44394924 kB' 'MemAvailable: 47902048 kB' 'Buffers: 2704 kB' 'Cached: 11710224 kB' 'SwapCached: 0 kB' 'Active: 8723888 kB' 'Inactive: 3506596 kB' 'Active(anon): 8329296 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520740 kB' 'Mapped: 188344 kB' 'Shmem: 7811740 kB' 'KReclaimable: 199548 kB' 'Slab: 572112 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 372564 kB' 'KernelStack: 12784 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9448764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB'
[per-field scan elided: each /proc/meminfo field, MemTotal through HugePages_Free, is tested against HugePages_Rsvd at setup/common.sh@32 and skipped with continue]
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:23.781 nr_hugepages=1024
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:23.781 resv_hugepages=0
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:23.781 surplus_hugepages=0
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:23.781 anon_hugepages=0
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
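Those four echoes summarize the state that the assertions at setup/hugepages.sh@107-110 then verify: the kernel's HugePages_Total must equal the configured page count plus any surplus and reserved pages. A hedged sketch of that accounting check, reusing the illustrative meminfo_get from the sketch above:

    # Sketch of the accounting check traced at setup/hugepages.sh@107-110,
    # reusing the illustrative meminfo_get from the previous sketch.
    nr_hugepages=1024                      # what the even_2G_alloc test configured
    surp=$(meminfo_get HugePages_Surp)     # 0 in the trace above
    resv=$(meminfo_get HugePages_Rsvd)     # 0 in the trace above
    total=$(meminfo_get HugePages_Total)   # 1024 in the trace below

    (( total == nr_hugepages + surp + resv )) || {
        echo "hugepage accounting mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
        exit 1
    }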
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.781 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44388372 kB' 'MemAvailable: 47895464 kB' 'Buffers: 2704 kB' 'Cached: 11710244 kB' 'SwapCached: 0 kB' 'Active: 8726556 kB' 'Inactive: 3506596 kB' 'Active(anon): 8331964 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523416 kB' 'Mapped: 188344 kB' 'Shmem: 7811760 kB' 'KReclaimable: 199484 kB' 'Slab: 572048 kB' 'SReclaimable: 199484 kB' 'SUnreclaim: 372564 kB' 'KernelStack: 12832 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9451436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196116 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB'
[per-field scan elided: each /proc/meminfo field, MemTotal through Unaccepted, is tested against HugePages_Total at setup/common.sh@32 and skipped with continue]
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.782 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.783 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21837224 kB' 'MemUsed: 11039716 kB' 'SwapCached: 0 kB' 'Active: 5582568 kB' 'Inactive: 3265212 kB' 'Active(anon): 5393996 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8583428 kB' 'Mapped: 74260 kB' 'AnonPages: 267456 kB' 'Shmem: 5129644 kB' 'KernelStack: 6936 kB' 'PageTables: 4772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118396 kB' 'Slab: 319800 kB' 'SReclaimable: 118396 kB' 'SUnreclaim: 201404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
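For the per-node pass, get_meminfo is handed a node index, so mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo; each line there carries a "Node 0 " prefix, which the mem=("${mem[@]#Node +([0-9]) }") step strips before the same field scan runs. A sketch of that per-node variant under the same caveats as above (node_meminfo_get is an illustrative name, not the repo's helper); even_2G_alloc uses this lookup to confirm the 1024 pages landed as 512 per node:

    # Sketch of the per-node lookup traced above: read one node's meminfo,
    # strip the "Node N " prefix each line carries, then scan fields as before.
    # node_meminfo_get is an illustrative name, not the setup/common.sh helper.
    shopt -s extglob                          # enables the +([0-9]) pattern
    node_meminfo_get() {
        local node=$1 get=$2 line var val _
        while read -r line; do
            line=${line#Node +([0-9]) }       # "Node 0 HugePages_Total: 512" -> "HugePages_Total: 512"
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

    node_meminfo_get 0 HugePages_Total   # 512 per the node0 snapshot above
    node_meminfo_get 0 HugePages_Surp    # 0, so nodes_test[0] is unchanged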
[per-field scan elided: each node0 field, MemTotal through HugePages_Free, is tested against HugePages_Surp at setup/common.sh@32 and skipped with continue]
00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22551148 kB' 'MemUsed: 5113604 kB' 'SwapCached: 0 kB' 'Active: 3141972 kB' 'Inactive: 241384 kB' 'Active(anon): 2935952 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 241384 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3129544 kB' 'Mapped: 114468 kB' 'AnonPages: 253912 kB' 'Shmem: 2682140 kB' 'KernelStack: 5848 kB' 'PageTables: 
3152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81088 kB' 'Slab: 252248 kB' 'SReclaimable: 81088 kB' 'SUnreclaim: 171160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.784 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.785 09:36:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:23.785 node0=512 expecting 512 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:23.785 node1=512 expecting 512 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:23.785 00:03:23.785 real 0m1.508s 00:03:23.785 user 0m0.650s 00:03:23.785 sys 0m0.819s 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:23.785 09:36:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:23.785 ************************************ 00:03:23.785 END TEST even_2G_alloc 00:03:23.785 ************************************ 00:03:23.785 09:36:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:23.785 09:36:40 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:23.785 09:36:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:23.785 09:36:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.785 09:36:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:23.785 
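Both hugepage tests above spend their trace time in the same lookup: setup/common.sh's get_meminfo walks /proc/meminfo, or a NUMA node's meminfo file with its "Node N " prefix stripped, one key at a time until the requested key matches, then echoes its value. A minimal standalone sketch of that pattern, reconstructed from the xtrace; the name get_meminfo_sketch and its exact layout are illustrative, not SPDK's code:

shopt -s extglob    # needed for the +([0-9]) pattern below

get_meminfo_sketch() {
    local get=$1 node=$2 var val _ line
    local mem_f=/proc/meminfo
    # per-node lookups read the node's own meminfo file instead
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node 1 " prefix on node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"            # e.g. HugePages_Surp for node1 is 0 in the run above
            return 0
        fi
    done
    return 1
}

Run against the node1 snapshot printed above, get_meminfo_sketch HugePages_Surp 1 would echo 0, matching the trace's echo 0 / return 0 that feeds the nodes_test accounting.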
************************************ 00:03:23.785 START TEST odd_alloc 00:03:23.785 ************************************ 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:23.785 09:36:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:23.786 09:36:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.786 09:36:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:25.167 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:25.167 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:25.167 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:25.167 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:25.167 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:25.167 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:25.167 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 
00:03:25.167 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:25.167 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:25.167 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:25.167 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:25.167 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:25.167 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:25.167 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:25.167 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:25.167 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:25.167 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.167 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.168 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44376260 kB' 'MemAvailable: 47883336 kB' 'Buffers: 2704 kB' 'Cached: 11710332 kB' 'SwapCached: 0 kB' 'Active: 8719776 kB' 'Inactive: 3506596 kB' 'Active(anon): 8325184 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516072 kB' 'Mapped: 187024 kB' 'Shmem: 7811848 kB' 'KReclaimable: 199452 kB' 'Slab: 571720 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 372268 kB' 'KernelStack: 13168 kB' 'PageTables: 9648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 
'Committed_AS: 9432444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196416 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB'
00:03:25.168 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace elided: IFS=': ' / read -r var val _ / [[ $var == AnonHugePages ]] / continue, repeated for MemTotal through HardwareCorrupted]
00:03:25.169 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:25.169 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.169 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:25.169 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:25.169 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:25.169 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.169 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:25.169 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:25.169 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.169 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.169 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.169 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.169 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.169 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
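For reference, the nr_hugepages=1025 / nodes_test[0]=513 / nodes_test[1]=512 figures in the odd_alloc setup above amount to a ceiling division plus an uneven split across the two NUMA nodes. One way to reproduce those numbers, as a sketch only (split_hugepages_sketch is a hypothetical name, and the real hugepages.sh distributes the remainder with its own loop):

split_hugepages_sketch() {
    local size_kb=2098176 hugepage_kb=2048 no_nodes=2    # values taken from the trace
    # round up: 2098176 / 2048 = 1024.5, so 1025 hugepages in total
    local nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
    local node base=$(( nr_hugepages / no_nodes )) rem=$(( nr_hugepages % no_nodes ))
    for (( node = 0; node < no_nodes; node++ )); do
        # the first 'rem' nodes carry the odd extra page
        echo "node${node}=$(( base + (node < rem ? 1 : 0) ))"
    done
}

Calling it prints node0=513 and node1=512, the same per-node assignment the xtrace records before HUGEMEM=2049 and setup output hand off to scripts/setup.sh.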
09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.169 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.169 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44386892 kB' 'MemAvailable: 47893968 kB' 'Buffers: 2704 kB' 'Cached: 11710336 kB' 'SwapCached: 0 kB' 'Active: 8719404 kB' 'Inactive: 3506596 kB' 'Active(anon): 8324812 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516128 kB' 'Mapped: 186988 kB' 'Shmem: 7811852 kB' 'KReclaimable: 199452 kB' 'Slab: 571680 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 372228 kB' 'KernelStack: 13024 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 9430092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196320 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB'
00:03:25.169 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace elided: read/compare/continue repeats for MemTotal through Shmem while scanning for HugePages_Surp]
00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.170 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.171 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.171 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:25.171 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:25.171 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:25.171 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:25.171 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:25.171 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:25.171 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.171 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.171 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.171 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.171 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.171 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.171 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.171 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.171 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44386620 kB' 'MemAvailable: 47893696 kB' 'Buffers: 2704 kB' 'Cached: 11710352 kB' 'SwapCached: 0 kB' 'Active: 8717684 kB' 'Inactive: 3506596 kB' 'Active(anon): 8323092 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514348 kB' 'Mapped: 186988 kB' 'Shmem: 7811868 kB' 'KReclaimable: 199452 kB' 'Slab: 571748 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 372296 kB' 'KernelStack: 12640 kB' 'PageTables: 7356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 9430112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB' 00:03:25.171 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
00:03:25.171 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [trace condensed: per-key scan against HugePages_Rsvd; every key from MemTotal through HugePages_Free takes the 'continue' branch]
00:03:25.172 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:25.172 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.172 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:25.172 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:25.172 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:25.172 nr_hugepages=1025
00:03:25.172 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:25.172 resv_hugepages=0
00:03:25.173 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:25.173 surplus_hugepages=0
00:03:25.173 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:25.173 anon_hugepages=0
00:03:25.173 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:25.173 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
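Every get_meminfo call traced above runs the same lookup: read the relevant meminfo file, strip any "Node N " prefix, and scan key/value pairs until the requested key matches. A minimal standalone sketch of that pattern, reconstructed from the trace (argument handling and the not-found return are assumptions; the real setup/common.sh may differ in detail):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # get_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo, or from
    # the per-node file when NODE is given (mirrors the trace's mem_f logic).
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem line
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue   # the long scans in the trace are this branch
            echo "$val"
            return 0
        done
        return 1                               # assumed: key not found
    }

For example, get_meminfo HugePages_Surp 0 prints node 0's surplus page count (0 in this run).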
00:03:25.173 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:25.173 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-29 -- # [trace condensed: local get=HugePages_Total; node unset; mem_f=/proc/meminfo; mapfile -t mem]
00:03:25.173 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44386628 kB' 'MemAvailable: 47893704 kB' 'Buffers: 2704 kB' 'Cached: 11710376 kB' 'SwapCached: 0 kB' 'Active: 8717732 kB' 'Inactive: 3506596 kB' 'Active(anon): 8323140 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514428 kB' 'Mapped: 186972 kB' 'Shmem: 7811892 kB' 'KReclaimable: 199452 kB' 'Slab: 571876 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 372424 kB' 'KernelStack: 12752 kB' 'PageTables: 7624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 9430136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB'
00:03:25.173 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [trace condensed: per-key scan against HugePages_Total; every key from MemTotal through Unaccepted takes the 'continue' branch]
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-29 -- # [trace condensed: local get=HugePages_Surp; node=0; mem_f=/sys/devices/system/node/node0/meminfo; mapfile -t mem]
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21837152 kB' 'MemUsed: 11039788 kB' 'SwapCached: 0 kB' 'Active: 5581828 kB' 'Inactive: 3265212 kB' 'Active(anon): 5393256 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8583548 kB' 'Mapped: 72724 kB' 'AnonPages: 266632 kB' 'Shmem: 5129764 kB' 'KernelStack: 6888 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118364 kB' 'Slab: 319636 kB' 'SReclaimable: 118364 kB' 'SUnreclaim: 201272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.174 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same field-match/continue cycle repeats for every remaining node0 meminfo field down to HugePages_Free ...]
00:03:25.175 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22548484 kB' 'MemUsed: 5116268 kB' 'SwapCached: 0 kB' 'Active: 3135948 kB' 'Inactive: 241384 kB' 'Active(anon): 2929928 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 241384 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3129552 kB' 'Mapped: 114248 kB' 'AnonPages: 247860 kB' 'Shmem: 2682148 kB' 'KernelStack: 5896 kB' 'PageTables: 3180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81088 kB' 'Slab: 252240 kB' 'SReclaimable: 81088 kB' 'SUnreclaim: 171152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... the same field-match/continue cycle repeats for every remaining node1 meminfo field down to HugePages_Free ...]
00:03:25.176 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.177 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.177 09:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:25.177 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
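[Editor's note] Both per-node reads above are the same helper, get_meminfo in setup/common.sh, whose @17-@33 trace lines the log keeps repeating: pick the right meminfo file, strip the per-node "Node N " prefix, then scan field by field until the requested key matches. A condensed sketch of that logic as reconstructed from the trace (bash 4+ for mapfile; a simplification, not the verbatim SPDK helper):

#!/usr/bin/env bash
# Sketch of get_meminfo as traced at setup/common.sh@17-33 above.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f mem

    mem_f=/proc/meminfo
    # With a node argument, prefer the per-node view; its lines carry
    # a "Node N " prefix, stripped by the extglob expansion below.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")

    # Field-by-field scan: every non-matching key is one of the
    # match/continue cycles condensed in the log above.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp 0   # prints 0 on this box, matching the trace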
00:03:25.177 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:25.177 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:25.177 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:25.177 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:25.177 node0=512 expecting 513
09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:25.177 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:25.177 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:25.177 09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:25.177 node1=513 expecting 512
09:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:25.177
00:03:25.177 real 0m1.418s
00:03:25.177 user 0m0.584s
00:03:25.177 sys 0m0.796s
00:03:25.177 09:36:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:25.177 09:36:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:25.177 ************************************
00:03:25.177 END TEST odd_alloc
00:03:25.177 ************************************
00:03:25.436 09:36:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:25.436 09:36:41 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:25.436 09:36:41 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:25.436 09:36:41 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:25.436 09:36:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:25.436 ************************************
00:03:25.436 START TEST custom_alloc
00:03:25.436 ************************************
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:25.436 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
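[Editor's note] The sizing pass just traced is plain arithmetic: get_test_nr_hugepages turns a pool size in kB into a page count (1048576 kB -> 512 pages, 2097152 kB -> 1024 pages at the 2048 kB default page size), and get_test_nr_hugepages_per_node then either splits that count evenly across the nodes (the 256/256 step above) or, once nodes_hp is pinned, mirrors it. A sketch of that math (default_hugepages hard-coded here as an assumption; the real script derives it from the system):

#!/usr/bin/env bash
# Sketch of the sizing math traced at setup/hugepages.sh@49-84 above.
# Assumption: 2048 kB default hugepage size (Hugepagesize in /proc/meminfo).
default_hugepages=2048

get_test_nr_hugepages() {
    local size=$1                                 # requested pool size in kB
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))
}

get_test_nr_hugepages 1048576 && echo "$nr_hugepages"   # -> 512
get_test_nr_hugepages 2097152 && echo "$nr_hugepages"   # -> 1024

# Even split across two nodes when no per-node mapping exists yet,
# i.e. the nodes_test[_no_nodes - 1]=256 steps in the trace:
echo "$(( 512 / 2 )) pages per node"                    # -> 256

With nodes_hp[0]=512 and nodes_hp[1]=1024 pinned, the second per-node pass copies those values into nodes_test instead, which is what produces the HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' string handed to scripts/setup.sh in the trace that follows.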
00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.437 09:36:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:26.819 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:26.819 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:26.819 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:26.819 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:26.819 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:26.819 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:26.819 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:26.819 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:26.819 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:26.819 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:26.819 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:26.819 0000:80:04.5 (8086 0e25): Already using the 
vfio-pci driver 00:03:26.819 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:26.819 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:26.819 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:26.819 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:26.819 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:26.819 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:26.819 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:26.819 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:26.819 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.819 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.819 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:26.819 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:26.819 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:26.819 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.819 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.819 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.819 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:26.819 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:26.819 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.819 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.819 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.819 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.819 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.820 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.820 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.820 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.820 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43351140 kB' 'MemAvailable: 46858216 kB' 'Buffers: 2704 kB' 'Cached: 11710472 kB' 'SwapCached: 0 kB' 'Active: 8717916 kB' 'Inactive: 3506596 kB' 'Active(anon): 8323324 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514556 kB' 'Mapped: 187044 kB' 'Shmem: 7811988 kB' 'KReclaimable: 199452 kB' 'Slab: 571660 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 372208 kB' 'KernelStack: 12816 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 9430340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB'
00:03:26.820 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:26.820 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:26.820 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.820 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same field-match/continue cycle repeats for every following /proc/meminfo field, MemFree through CommitLimit ...]
setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.821 09:36:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43352636 kB' 'MemAvailable: 46859712 kB' 'Buffers: 2704 kB' 'Cached: 11710472 kB' 'SwapCached: 0 kB' 'Active: 8717800 kB' 'Inactive: 3506596 kB' 'Active(anon): 8323208 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514452 kB' 'Mapped: 186984 kB' 'Shmem: 7811988 kB' 'KReclaimable: 199452 kB' 'Slab: 571660 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 372208 kB' 'KernelStack: 12816 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 9430356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB' 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.821 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached 
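The records above trace SPDK's get_meminfo helper from setup/common.sh: snapshot the meminfo file into an array, strip any per-node prefix, then split each line on ': ' until the requested key matches. As a reading aid, here is a minimal bash sketch of that pattern, reconstructed from the xtrace output; it is an approximation under those assumptions, not the verbatim setup/common.sh source:

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above (reconstructed, not verbatim).
shopt -s extglob  # required for the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=$2
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# Per-NUMA-node variant: used when a node number is given and sysfs has it.
	# With an empty $node (as in the trace), this test fails and we keep /proc/meminfo.
	[[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem < "$mem_f"
	# Per-node meminfo lines look like "Node 0 MemTotal: ..."; drop that prefix
	mem=("${mem[@]#Node +([0-9]) }")
	local line
	for line in "${mem[@]}"; do
		# Split "Key:   value kB" into var=Key, val=value (unit discarded)
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

# Example: against the snapshot printed above, this echoes "0"
get_meminfo HugePages_Surp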
[trace condensed: setup/common.sh@31-32 walks every /proc/meminfo key from the snapshot (MemTotal through HugePages_Rsvd), comparing each against HugePages_Surp and continuing, until the matching key is reached]
00:03:26.822 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.822 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.822 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:26.822 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:26.822 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:26.822 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:26.822 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:26.822 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:26.822 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.822 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.822 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.822 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.822 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.823 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.823 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.823 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.823 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43351380 kB' 'MemAvailable: 46858456 kB' 'Buffers: 2704 kB' 'Cached: 11710492 kB' 'SwapCached: 0 kB' 'Active: 8717800 kB' 'Inactive: 3506596 kB' 'Active(anon): 8323208 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514424 kB' 'Mapped: 186984 kB' 'Shmem: 7812008 kB' 'KReclaimable: 199452 kB' 'Slab: 571636 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 372184 kB' 'KernelStack: 12800 kB' 'PageTables: 7652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 9430380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB'
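A side note on how these comparisons are rendered: the backslash-heavy right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not in the script source. That is how bash xtrace prints a quoted [[ == ]] pattern, escaping every character to mark it as a literal string match rather than a glob. A tiny demonstration, with hypothetical values standing in for the loop variables:

# Run under 'set -x' to see the escaped pattern in the trace output.
set -x
get=HugePages_Surp
var=MemTotal
# Quoted pattern => literal match; xtrace renders it character-escaped:
#   [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[[ $var == "$get" ]]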
[trace condensed: setup/common.sh@31-32 walks every /proc/meminfo key (MemTotal through HugePages_Free), comparing each against HugePages_Rsvd and continuing, until the matching key is reached]
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:26.824 nr_hugepages=1536
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:26.824 resv_hugepages=0
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:26.824 surplus_hugepages=0
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:26.824 anon_hugepages=0
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.824 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.825 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.825 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43351380 kB' 'MemAvailable: 46858456 kB' 'Buffers: 2704 kB' 'Cached: 11710508 kB' 'SwapCached: 0 kB' 'Active: 8717804 kB' 'Inactive: 3506596 kB' 'Active(anon): 8323212 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514416 kB' 'Mapped: 186984 kB' 'Shmem: 7812024 kB' 'KReclaimable: 199452 kB' 'Slab: 571684 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 372232 kB' 'KernelStack: 12800 kB' 'PageTables: 7668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 9430400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB'
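After the three lookups, the test asserts its accounting at setup/hugepages.sh@107-109: the 1536 hugepages requested by the custom_alloc policy must equal what the kernel reports, with no anonymous, surplus, or reserved pages. In spirit the guards amount to the following sketch; the values are the ones echoed in the log above, while the variable roles are inferred from the trace and the exact syntax in hugepages.sh may differ:

# Sketch of the hugepage accounting check traced at setup/hugepages.sh@107-109.
nr_hugepages=1536   # pages requested by the custom_alloc test
anon=0              # AnonHugePages  (get_meminfo, above)
surp=0              # HugePages_Surp (get_meminfo, above)
resv=0              # HugePages_Rsvd (get_meminfo, above)
total=1536          # HugePages_Total from the same snapshot

(( total == nr_hugepages + surp + resv )) || { echo "hugepage accounting mismatch"; exit 1; }
(( total == nr_hugepages )) || { echo "unexpected surplus/reserved pages"; exit 1; }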
[trace condensed: setup/common.sh@31-32 again walks the /proc/meminfo keys (MemTotal through CommitLimit), comparing each against HugePages_Total and continuing]
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.825 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.826 09:36:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.826 
00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.826 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21835416 kB' 'MemUsed: 11041524 kB' 'SwapCached: 0 kB' 'Active: 5582536 kB' 'Inactive: 3265212 kB' 'Active(anon): 5393964 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8583612 kB' 'Mapped: 72736 kB' 'AnonPages: 267324 kB' 'Shmem: 5129828 kB' 'KernelStack: 6920 kB' 'PageTables: 4636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118364 kB' 'Slab: 319484 kB' 'SReclaimable: 118364 kB' 'SUnreclaim: 201120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the setup/common.sh@31-32 read loop skips each node0 meminfo field above until HugePages_Surp matches]
00:03:26.827 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.827 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.827 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:26.827 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
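The hugepages.sh@115-117 loop just traced folds reserved and surplus pages into each node's expected count; after node1 is handled the same way below, @126-130 compare those expectations against the kernel's per-node figures. A loose sketch of that bookkeeping with this run's numbers (names follow the trace, and the get_meminfo sketch above is reused; the final comma-joined match at @130 is simplified here to a plain list comparison):

declare -a nodes_sys=(512 1024)    # per-node totals collected by get_nodes
declare -a nodes_test=(512 1024)   # per-node counts the test requested
resv=0                             # HugePages_Rsvd from the global lookup
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                                   # @116
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # @117
    echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
[[ ${nodes_sys[*]} == "${nodes_test[*]}" ]]   # "512 1024" vs "512 1024" in this run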
00:03:26.827 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.827 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.827 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:26.827 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.827 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:26.827 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.827 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:26.827 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:26.827 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.827 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.827 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 21517364 kB' 'MemUsed: 6147388 kB' 'SwapCached: 0 kB' 'Active: 3135332 kB' 'Inactive: 241384 kB' 'Active(anon): 2929312 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 241384 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3129648 kB' 'Mapped: 114248 kB' 'AnonPages: 247112 kB' 'Shmem: 2682244 kB' 'KernelStack: 5880 kB' 'PageTables: 3032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81088 kB' 'Slab: 252200 kB' 'SReclaimable: 81088 kB' 'SUnreclaim: 171112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the setup/common.sh@31-32 read loop skips each node1 meminfo field above until HugePages_Surp matches]
00:03:26.829 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.829 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.829 09:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:26.829 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.829 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.829 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.829 09:36:43
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.829 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:26.829 node0=512 expecting 512 00:03:26.829 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.829 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.829 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.829 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:26.829 node1=1024 expecting 1024 00:03:26.829 09:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:26.829 00:03:26.829 real 0m1.566s 00:03:26.829 user 0m0.618s 00:03:26.829 sys 0m0.912s 00:03:26.829 09:36:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.829 09:36:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:26.829 ************************************ 00:03:26.829 END TEST custom_alloc 00:03:26.829 ************************************ 00:03:26.829 09:36:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:26.829 09:36:43 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:26.829 09:36:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.829 09:36:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.829 09:36:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:26.829 ************************************ 00:03:26.829 START TEST no_shrink_alloc 00:03:26.829 ************************************ 00:03:26.829 09:36:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:26.829 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:26.829 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:26.829 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:26.829 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:26.829 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:27.087 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:27.087 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.087 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:27.087 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:27.087 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:27.087 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.087 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:27.087 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.087 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.087 09:36:43 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.087 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:27.087 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:27.087 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:27.087 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:27.087 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:27.087 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.087 09:36:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:28.023 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:28.023 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:28.023 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:28.023 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:28.023 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:28.023 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:28.023 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:28.023 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:28.023 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:28.023 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:28.023 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:28.023 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:28.023 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:28.023 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:28.023 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:28.023 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:28.023 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:28.284 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:28.284 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:28.284 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.284 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.284 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:28.284 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:28.284 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:28.284 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:28.284 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:28.284 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.284 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:28.284 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.284 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.284 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo
00:03:28.284 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.284 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.284 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.284 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.284 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44364872 kB' 'MemAvailable: 47871948 kB' 'Buffers: 2704 kB' 'Cached: 11710596 kB' 'SwapCached: 0 kB' 'Active: 8718532 kB' 'Inactive: 3506596 kB' 'Active(anon): 8323940 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514992 kB' 'Mapped: 187132 kB' 'Shmem: 7812112 kB' 'KReclaimable: 199452 kB' 'Slab: 571704 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 372252 kB' 'KernelStack: 12784 kB' 'PageTables: 7664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9430796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: the setup/common.sh@31-32 read loop skips each system-wide meminfo field on its way to AnonHugePages]
09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44367492 kB' 'MemAvailable: 47874568 kB' 'Buffers: 2704 kB' 'Cached: 11710600 kB' 'SwapCached: 0 kB' 'Active: 8718740 kB' 'Inactive: 3506596 kB' 'Active(anon): 8324148 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515220 kB' 'Mapped: 187072 kB' 'Shmem: 7812116 kB' 'KReclaimable: 199452 kB' 'Slab: 571704 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 372252 kB' 'KernelStack: 12832 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 
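The trace above is setup/common.sh's get_meminfo helper working through /proc/meminfo in plain bash: mapfile slurps the file into an array, the extglob substitution mem=("${mem[@]#Node +([0-9]) }") strips any leading "Node <n> " prefix so per-node meminfo files parse identically, and a read loop with IFS=': ' walks key/value pairs until the requested key matches, at which point the value is echoed (AnonHugePages is 0 kB here, hence echo 0). A minimal standalone sketch of that pattern, reconstructed from what the trace shows rather than copied from the script, with a simplified loop body:

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

    get_meminfo() {    # usage: get_meminfo <Key> [<numa-node-number>]
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # per-node counters live in sysfs; fall back to /proc/meminfo otherwise
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip "Node 0 " prefixes from sysfs lines
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo AnonHugePages      # prints 0 on the system traced above
    get_meminfo HugePages_Total    # prints 1024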
00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.285 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44367492 kB' 'MemAvailable: 47874568 kB' 'Buffers: 2704 kB' 'Cached: 11710600 kB' 'SwapCached: 0 kB' 'Active: 8718740 kB' 'Inactive: 3506596 kB' 'Active(anon): 8324148 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515220 kB' 'Mapped: 187072 kB' 'Shmem: 7812116 kB' 'KReclaimable: 199452 kB' 'Slab: 571704 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 372252 kB' 'KernelStack: 12832 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9430812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB'
00:03:28.286 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [... repetitive @31/@32 trace elided: every /proc/meminfo key before HugePages_Surp is read with IFS=': ' and skipped via continue ...]
00:03:28.287 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.287 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.287 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:28.287 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
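verify_nr_hugepages is collecting three counters from these dumps: anon (AnonHugePages), surp (HugePages_Surp), and, next, resv (HugePages_Rsvd), to confirm that the 1024 pages requested per node earlier (nodes_test[_no_nodes]=1024) came up as a clean static pool with nothing anonymous, surplus, or reserved. The snapshot also lets the kernel's own hugepage arithmetic be cross-checked; a small illustrative check, with variable names of our choosing and values hard-coded from the dump above:

    # values copied from the /proc/meminfo snapshot above; the names are ours
    total=1024              # HugePages_Total (pages, not kB)
    free=1024               # HugePages_Free
    rsvd=0                  # HugePages_Rsvd
    surp=0                  # HugePages_Surp
    hugepagesize_kb=2048    # Hugepagesize: 2048 kB
    hugetlb_kb=2097152      # Hugetlb: 2097152 kB

    # Hugetlb accounts the whole pool: 1024 pages * 2048 kB = 2097152 kB
    (( total * hugepagesize_kb == hugetlb_kb )) && echo "pool size consistent"

    # a clean static pool: every page free, none reserved, none surplus
    (( free == total && rsvd == 0 && surp == 0 )) && echo "all 1024 pages idle"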
00:03:28.287 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:28.287 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:28.287 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:28.287 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:28.287 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.287 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.287 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.287 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.287 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.287 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.287 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.287 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.287 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44367972 kB' 'MemAvailable: 47875048 kB' 'Buffers: 2704 kB' 'Cached: 11710604 kB' 'SwapCached: 0 kB' 'Active: 8718016 kB' 'Inactive: 3506596 kB' 'Active(anon): 8323424 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514472 kB' 'Mapped: 186996 kB' 'Shmem: 7812120 kB' 'KReclaimable: 199452 kB' 'Slab: 571684 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 372232 kB' 'KernelStack: 12832 kB' 'PageTables: 7732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9430836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB'
00:03:28.288 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [... repetitive @31/@32 trace elided: keys from MemTotal through HugePages_Free are read with IFS=': ' and skipped via continue; the scan continues below ...]
00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31
-- # read -r var val _ 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:28.289 nr_hugepages=1024 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:28.289 resv_hugepages=0 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:28.289 surplus_hugepages=0 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:28.289 anon_hugepages=0 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44366212 kB' 'MemAvailable: 47873288 kB' 'Buffers: 2704 kB' 'Cached: 11710644 kB' 'SwapCached: 0 kB' 'Active: 8720808 kB' 'Inactive: 3506596 kB' 'Active(anon): 8326216 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517232 kB' 'Mapped: 187432 kB' 'Shmem: 7812160 kB' 'KReclaimable: 199452 kB' 'Slab: 571684 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 372232 kB' 'KernelStack: 12800 kB' 'PageTables: 7632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9434060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB' 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.289 09:36:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.289 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
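(The arithmetic checks at hugepages.sh@107-110 verify that the kernel's view agrees with the requested allocation: HugePages_Total read back from the kernel must equal nr_hugepages plus surplus plus reserved pages. A hedged sketch of that verification, reusing the illustrative get_meminfo_value helper from above:)

    # Check hugepage accounting against /proc/meminfo, in the spirit of
    # verify_nr_hugepages. nr_hugepages is the requested count; surplus and
    # reserved come from the kernel. Variable names are illustrative.
    nr_hugepages=1024
    surp=$(get_meminfo_value HugePages_Surp)
    resv=$(get_meminfo_value HugePages_Rsvd)
    total=$(get_meminfo_value HugePages_Total)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: total=$total"
    else
        echo "mismatch: total=$total expected $((nr_hugepages + surp + resv))" >&2
    fi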
00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.290 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:28.291 09:36:44 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20760720 kB' 'MemUsed: 12116220 kB' 'SwapCached: 0 kB' 'Active: 5588028 kB' 'Inactive: 3265212 kB' 'Active(anon): 5399456 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8583620 kB' 'Mapped: 73184 kB' 'AnonPages: 272732 kB' 'Shmem: 5129836 kB' 'KernelStack: 6920 kB' 'PageTables: 4648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118364 kB' 'Slab: 319464 kB' 'SReclaimable: 118364 kB' 'SUnreclaim: 201100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
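(get_nodes above enumerates /sys/devices/system/node/node+([0-9]) with extglob, and the node=0 call switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the traced ${mem[@]#Node +([0-9]) } expansion strips before the same read loop runs. A minimal sketch of that per-node read; the helper name is illustrative:)

    # Enumerate NUMA nodes and read one hugepage counter per node. Lines in
    # the per-node meminfo carry a "Node <N> " prefix, stripped here the same
    # way the traced expansion does it.
    shopt -s extglob
    get_node_meminfo() {
        local key=$1 node=$2 line var val _
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }          # drop the "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        echo "node${n}: HugePages_Surp=$(get_node_meminfo HugePages_Surp "$n")"
    done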
00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.291 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:28.292 node0=1024 expecting 1024 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.292 09:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:29.225 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:29.225 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:29.225 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:29.492 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:29.492 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:29.492 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:29.492 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:29.492 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:29.492 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:29.492 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:29.492 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:29.492 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:29.492 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:29.492 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:29.493 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:29.493 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:29.493 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 
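(The "Already using the vfio-pci driver" lines above report the driver each PCI function is currently bound to, which is visible through the device's sysfs driver symlink. The sketch below shows only that sysfs lookup; it is an assumption for illustration and not the actual logic of scripts/setup.sh.)

    # Report which PCI functions are already bound to vfio-pci, based on the
    # standard /sys/bus/pci layout. Illustrative only.
    for dev in /sys/bus/pci/devices/*; do
        drv_link="$dev/driver"
        if [[ -L $drv_link ]]; then
            drv=$(basename "$(readlink -f "$drv_link")")
            [[ $drv == vfio-pci ]] && echo "${dev##*/}: already bound to vfio-pci"
        fi
    done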
00:03:29.493 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44379112 kB' 'MemAvailable: 47886188 kB' 'Buffers: 2704 kB' 'Cached: 11710708 kB' 'SwapCached: 0 kB' 'Active: 8724532 kB' 'Inactive: 3506596 kB' 'Active(anon): 8329940 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520940 kB' 'Mapped: 187848 kB' 'Shmem: 7812224 kB' 'KReclaimable: 199452 kB' 'Slab: 571784 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 372332 kB' 'KernelStack: 12784 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9437136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196084 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB' 00:03:29.493 
09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
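(The hugepages.sh@96 test above matches the kernel's transparent-hugepage switch: the active mode is the bracketed word in "always [madvise] never", so the *\[\n\e\v\e\r\]* pattern only matches when THP is fully off, in which case AnonHugePages contributes nothing. A sketch of the same check; the sysfs path is the standard location, variable names are illustrative:)

    # Read the active transparent-hugepage mode; the kernel brackets the
    # current setting, e.g. "always [madvise] never". Only when THP is not
    # "[never]" is AnonHugePages worth sampling.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_value AnonHugePages)
        echo "THP active (${thp}); AnonHugePages=${anon} kB"
    else
        echo "THP disabled; anon hugepages ignored"
    fi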
00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.493 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44386544 kB' 'MemAvailable: 47893620 kB' 'Buffers: 2704 kB' 'Cached: 11710712 kB' 'SwapCached: 0 kB' 'Active: 8720092 kB' 'Inactive: 3506596 kB' 'Active(anon): 8325500 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516504 kB' 'Mapped: 187444 kB' 'Shmem: 7812228 kB' 'KReclaimable: 199452 kB' 'Slab: 571776 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 372324 kB' 'KernelStack: 12800 kB' 'PageTables: 7664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9433448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.494 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
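This is the same field-by-field scan again, now hunting for HugePages_Surp after the AnonHugePages pass returned 0. A condensed sketch of the approach, assuming the stock 'Key: value [kB]' layout of /proc/meminfo and ignoring the per-node case (the function name here is illustrative, not the script's own):

get_meminfo_value() {
    local get=$1 var val _
    # IFS=': ' splits 'HugePages_Surp: 0' into key and value, as in the trace
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every mismatch is one 'continue' above
        echo "$val"                        # the third field (_) swallows a trailing kB
        return 0
    done < /proc/meminfo
    return 1
}
get_meminfo_value HugePages_Surp    # prints 0 on this machine, per the snapshot above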
00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:29.495 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.496 09:36:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44382336 kB' 'MemAvailable: 47889412 kB' 'Buffers: 2704 kB' 'Cached: 11710728 kB' 'SwapCached: 0 kB' 'Active: 8724812 kB' 'Inactive: 3506596 kB' 'Active(anon): 8330220 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521368 kB' 'Mapped: 187460 kB' 'Shmem: 7812244 kB' 'KReclaimable: 199452 kB' 'Slab: 571868 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 372416 kB' 'KernelStack: 12832 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9439544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196036 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.496 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.497 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:29.498 nr_hugepages=1024 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.498 resv_hugepages=0 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:29.498 surplus_hugepages=0 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.498 anon_hugepages=0 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
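With anon=0, surp=0 and resv=0 collected, hugepages.sh asserts that the pool visible in /proc/meminfo is exactly the 1024 requested pages with no surplus or reserved slack before it re-reads HugePages_Total. A hedged sketch of an equivalent consistency check done directly with awk (variable names are illustrative):

expected=1024
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
# mirrors the traced (( 1024 == nr_hugepages + surp + resv )) arithmetic
if (( total == expected + surp + resv )); then
    echo "hugepage pool consistent: $total pages"
else
    echo "unexpected pool: total=$total surp=$surp resv=$resv" >&2
fi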
00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44383884 kB' 'MemAvailable: 47890960 kB' 'Buffers: 2704 kB' 'Cached: 11710752 kB' 'SwapCached: 0 kB' 'Active: 8724664 kB' 'Inactive: 3506596 kB' 'Active(anon): 8330072 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521112 kB' 'Mapped: 187928 kB' 'Shmem: 7812268 kB' 'KReclaimable: 199452 kB' 'Slab: 571868 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 372416 kB' 'KernelStack: 12976 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9439568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196180 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 13887488 kB' 'DirectMap1G: 53477376 kB'
00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:29.498 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... common.sh@31/@32 read/compare/continue xtrace repeated for every remaining /proc/meminfo field ...]
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20764192 kB' 'MemUsed: 12112748 kB' 'SwapCached: 0 kB' 'Active: 5582424 kB' 'Inactive: 3265212 kB' 'Active(anon): 5393852 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8583664 kB' 'Mapped: 72788 kB' 'AnonPages: 267140 kB' 'Shmem: 5129880 kB' 'KernelStack: 6872 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118364 kB' 'Slab: 319428 kB' 'SReclaimable: 118364 kB' 'SUnreclaim: 201064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.759 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... common.sh@31/@32 read/compare/continue xtrace repeated for every remaining node0 meminfo field ...]
00:03:29.761 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.761 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.761 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:29.761 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:29.761 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:29.761 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:29.761 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:29.761 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:29.761 node0=1024 expecting 1024
00:03:29.761 09:36:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:29.761
00:03:29.761 real 0m2.710s
00:03:29.761 user 0m1.089s
00:03:29.761 sys 0m1.535s
00:03:29.761 09:36:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:29.761 09:36:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:29.761 ************************************
00:03:29.761 END TEST no_shrink_alloc
00:03:29.761 ************************************
00:03:29.761 09:36:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:29.761 09:36:46 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:03:29.761 09:36:46 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:29.761 09:36:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:29.761 09:36:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:29.761 09:36:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:29.761 09:36:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:29.761 09:36:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:29.761 09:36:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:29.761 09:36:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:29.761 09:36:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:29.761 09:36:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:29.761 09:36:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:29.761 09:36:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:29.761 09:36:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:29.761
00:03:29.761 real 0m11.390s
00:03:29.761 user 0m4.308s
00:03:29.761 sys 0m6.015s
00:03:29.761 09:36:46 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:29.761 09:36:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:29.761 ************************************
00:03:29.761 END TEST hugepages
00:03:29.761 ************************************
00:03:29.761 09:36:46 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:29.761 09:36:46 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:29.761 09:36:46 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:29.761 09:36:46 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:29.761 09:36:46 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:29.761 ************************************
00:03:29.761 START TEST driver
00:03:29.761 ************************************
00:03:29.761 09:36:46 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
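# ---------------------------------------------------------------------------
# Editorial sketch -- illustrative only, not part of the captured trace.
# The clear_hp teardown traced above iterates both NUMA nodes and writes 0
# into each per-node, per-size hugepage pool before the next suite runs.
# A minimal standalone equivalent using the standard sysfs layout (writing
# these files requires root):
shopt -s nullglob  # an unmatched glob should expand to nothing, not itself
for node_dir in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node_dir"/hugepages/hugepages-*; do
        # Release every reserved hugepage of this size on this node.
        echo 0 > "$hp/nr_hugepages"
    done
done
# ---------------------------------------------------------------------------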
00:03:29.761 * Looking for test storage...
00:03:29.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:29.761 09:36:46 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:03:29.761 09:36:46 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:29.761 09:36:46 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:32.288 09:36:48 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:32.288 09:36:48 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:32.288 09:36:48 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:32.288 09:36:48 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:32.288 ************************************
00:03:32.288 START TEST guess_driver
00:03:32.288 ************************************
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 ))
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:03:32.288 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:32.288 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:32.288 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:32.288 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:32.288 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:03:32.288 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:03:32.288 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:32.288 Looking for driver=vfio-pci
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:03:32.288 09:36:48 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:33.664 09:36:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:33.664 09:36:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:33.664 09:36:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... driver.sh@58/@61/@57 marker/driver/read xtrace repeated for each further config line reporting a vfio-pci binding ...]
00:03:34.601 09:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:34.601 09:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:34.601 09:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:34.601 09:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:34.601 09:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:03:34.601 09:36:51 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:34.601 09:36:51 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:37.151
00:03:37.151 real 0m4.873s
00:03:37.151 user 0m1.110s
00:03:37.151 sys 0m1.883s
00:03:37.151 09:36:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:37.151 09:36:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:03:37.151 ************************************
00:03:37.151 END TEST guess_driver
00:03:37.151 ************************************
00:03:37.151 09:36:53 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0
00:03:37.151
00:03:37.151 real 0m7.374s
00:03:37.151 user 0m1.702s
00:03:37.151 sys 0m2.838s
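# ---------------------------------------------------------------------------
# Editorial sketch -- illustrative only, not part of the captured trace.
# guess_driver settled on vfio-pci above because /sys/kernel/iommu_groups was
# populated (141 groups, i.e. the IOMMU is enabled) and `modprobe
# --show-depends vfio_pci` resolved to a loadable module chain. A condensed
# sketch of that decision; the no-IOMMU fallback driver name is an assumption
# for illustration:
pick_driver_sketch() {
    shopt -s nullglob  # so an empty iommu_groups dir yields an empty array
    local groups=(/sys/kernel/iommu_groups/*)
    # vfio-pci is only usable when IOMMU groups exist and the module plus its
    # dependencies resolve on the running kernel.
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci &> /dev/null; then
        echo vfio-pci
    else
        echo uio_pci_generic  # assumed fallback when vfio is unavailable
    fi
}
# ---------------------------------------------------------------------------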
09:36:53 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:37.151 09:36:53 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:37.151 ************************************
00:03:37.151 END TEST driver
00:03:37.151 ************************************
00:03:37.151 09:36:53 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:37.151 09:36:53 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:37.151 09:36:53 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:37.151 09:36:53 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:37.151 09:36:53 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:37.151 ************************************
00:03:37.151 START TEST devices
00:03:37.151 ************************************
00:03:37.151 09:36:53 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:37.151 * Looking for test storage...
00:03:37.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:37.151 09:36:53 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:37.151 09:36:53 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:03:37.151 09:36:53 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:37.151 09:36:53 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:39.049 09:36:55 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:03:39.049 09:36:55 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:39.049 09:36:55 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:39.049 09:36:55 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:39.049 09:36:55 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:39.049 09:36:55 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:39.049 09:36:55 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:39.049 09:36:55 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:39.049 09:36:55 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:39.049 09:36:55 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:03:39.049 09:36:55 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:03:39.049 09:36:55 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:39.049 09:36:55 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:39.049 09:36:55 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:39.049 09:36:55 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:39.049 09:36:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:39.049 09:36:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:03:39.049 09:36:55 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0
00:03:39.049 09:36:55 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:03:39.049 09:36:55 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:03:39.049 09:36:55 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
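# ---------------------------------------------------------------------------
# Editorial sketch -- illustrative only, not part of the captured trace.
# get_zoned_devs above skips zoned namespaces: the kernel exposes each block
# device's zone model in /sys/block/<dev>/queue/zoned, and only "none" (a
# conventional, randomly writable device) is usable by the mount tests.
# A minimal standalone equivalent:
declare -A zoned_devs=()
for dev in /sys/block/nvme*n*; do
    [[ -e $dev/queue/zoned ]] || continue  # also guards an unmatched glob
    # Anything other than "none" (host-aware or host-managed) is zoned and
    # gets excluded from the candidate disks.
    [[ $(<"$dev/queue/zoned") != none ]] && zoned_devs[${dev##*/}]=1
done
# ---------------------------------------------------------------------------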
09:36:55 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:39.049 No valid GPT data, bailing 00:03:39.049 09:36:55 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:39.049 09:36:55 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:39.049 09:36:55 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:39.049 09:36:55 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:39.049 09:36:55 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:39.049 09:36:55 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:39.049 09:36:55 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:39.049 09:36:55 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:39.049 09:36:55 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:39.049 09:36:55 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:39.049 09:36:55 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:39.049 09:36:55 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:39.049 09:36:55 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:39.049 09:36:55 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.049 09:36:55 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.049 09:36:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:39.049 ************************************ 00:03:39.049 START TEST nvme_mount 00:03:39.049 ************************************ 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:39.049 09:36:55 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:39.986 Creating new GPT entries in memory. 00:03:39.986 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:39.986 other utilities. 00:03:39.986 09:36:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:39.986 09:36:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:39.986 09:36:56 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:39.986 09:36:56 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:39.986 09:36:56 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:40.923 Creating new GPT entries in memory. 00:03:40.923 The operation has completed successfully. 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1756253 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:40.923 09:36:57 
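
The partition step traced above converts the 1 GiB size constant into 512-byte sectors, zaps any existing partition table, and creates one partition spanning sectors 2048 through 2099199, taking flock on the disk so parallel jobs cannot race. Roughly, with udevadm settle standing in for SPDK's sync_dev_uevents.sh helper:

#!/usr/bin/env bash
disk=/dev/nvme0n1               # the test disk in this run
sgdisk "$disk" --zap-all        # destroy existing GPT/MBR structures
udevadm settle                  # stand-in for scripts/sync_dev_uevents.sh
# 1073741824 bytes / 512 = 2097152 sectors; 2048..2099199 is exactly 1 GiB.
flock "$disk" sgdisk "$disk" --new=1:2048:2099199
udevadm settle
test -b "${disk}p1" && echo "partition ready"
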
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.923 09:36:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.858 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.117 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:42.117 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:42.117 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.117 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:42.117 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:42.117 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:42.117 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.117 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.117 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:42.117 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:42.117 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:42.117 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:42.117 09:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:42.376 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:42.376 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:42.376 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:42.376 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.376 09:36:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:43.754 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.754 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:43.754 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:43.754 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.754 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.754 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.754 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.754 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.754 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.754 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.754 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.754 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.754 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.754 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.754 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.754 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.755 09:37:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
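
Everything the nvme_mount test does between the partitioning and the PCI scan boils down to: make a filesystem, mount it, plant a dummy test_nvme file, verify both are visible, then unmount and wipe. A condensed sketch (mount path shortened; the real test mounts under spdk/test/setup/nvme_mount):

#!/usr/bin/env bash
part=/dev/nvme0n1p1
mnt=/tmp/nvme_mount             # illustrative; not the test's real path
mkdir -p "$mnt"
mkfs.ext4 -qF "$part"           # -q quiet, -F force on a block device
mount "$part" "$mnt"
: > "$mnt/test_nvme"            # the dummy file the verify step looks for
mountpoint -q "$mnt" && [[ -e $mnt/test_nvme ]] && echo verified
# cleanup_nvme equivalent: drop the file, unmount, wipe signatures
rm "$mnt/test_nvme"
umount "$mnt"
wipefs --all "$part"
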
00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.690 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.691 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.691 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.691 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.691 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.691 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.691 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.691 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.691 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.691 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.691 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.691 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.691 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.691 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.691 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.950 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:44.950 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:44.950 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:44.950 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:44.950 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.950 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:44.950 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:44.950 09:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:44.950 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:44.950 00:03:44.950 real 0m6.198s 00:03:44.950 user 0m1.456s 00:03:44.950 sys 0m2.284s 00:03:44.950 09:37:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.950 09:37:01 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:03:44.950 ************************************ 00:03:44.950 END TEST nvme_mount 00:03:44.950 ************************************ 00:03:44.950 09:37:01 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:44.950 09:37:01 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:44.950 09:37:01 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.950 09:37:01 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.950 09:37:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:44.950 ************************************ 00:03:44.950 START TEST dm_mount 00:03:44.950 ************************************ 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:44.950 09:37:01 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:46.327 Creating new GPT entries in memory. 00:03:46.327 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:46.327 other utilities. 00:03:46.327 09:37:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:46.327 09:37:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:46.327 09:37:02 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:46.327 09:37:02 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:46.327 09:37:02 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:47.263 Creating new GPT entries in memory. 00:03:47.263 The operation has completed successfully. 00:03:47.263 09:37:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:47.263 09:37:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:47.263 09:37:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:47.263 09:37:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:47.263 09:37:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:48.262 The operation has completed successfully. 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1758632 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.262 09:37:04 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:49.198 09:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.457 09:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:49.457 09:37:06 
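
The dm_mount test repeats the cycle on a device-mapper node: two 1 GiB partitions are joined into /dev/mapper/nvme_dm_test, which resolves to dm-0 and shows up as a holder of both partitions. The log does not show the table SPDK feeds dmsetup, so the linear layout below is an assumption:

#!/usr/bin/env bash
p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")    # partition sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # e.g. dm-0
# both partitions should now expose the dm node as a holder
ls /sys/class/block/nvme0n1p1/holders/$dm /sys/class/block/nvme0n1p2/holders/$dm
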
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:49.457 09:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:49.457 09:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:49.457 09:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:49.457 09:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:49.457 09:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:49.457 09:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:49.457 09:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.457 09:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:49.457 09:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:49.457 09:37:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.457 09:37:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.392 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.652 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:50.652 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:50.652 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:50.652 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:50.652 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.652 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:50.652 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:50.652 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:50.652 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:50.652 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:50.652 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:50.652 09:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:50.652 00:03:50.652 real 0m5.664s 00:03:50.652 user 0m0.910s 00:03:50.652 sys 0m1.596s 00:03:50.652 09:37:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.652 09:37:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:50.652 ************************************ 00:03:50.652 END TEST dm_mount 00:03:50.652 ************************************ 00:03:50.652 09:37:07 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0
00:03:50.652 09:37:07 -- setup/devices.sh@1 -- # cleanup
00:03:50.652 09:37:07 -- setup/devices.sh@11 -- # cleanup_nvme
00:03:50.652 09:37:07 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:50.652 09:37:07 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:50.652 09:37:07 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:50.652 09:37:07 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:50.652 09:37:07 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:50.911 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:50.911 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:50.911 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:50.911 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:50.911 09:37:07 -- setup/devices.sh@12 -- # cleanup_dm
00:03:50.911 09:37:07 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:50.911 09:37:07 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:50.911 09:37:07 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:50.911 09:37:07 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:50.911 09:37:07 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:03:50.911 09:37:07 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:03:50.911
00:03:50.911 real 0m13.828s
00:03:50.911 user 0m3.021s
00:03:50.911 sys 0m4.953s
00:03:50.911 09:37:07 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:50.911 09:37:07 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:50.911 ************************************
00:03:50.911 END TEST devices
00:03:50.911 ************************************
00:03:50.911 09:37:07 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:50.911
00:03:50.911 real 0m43.089s
00:03:50.911 user 0m12.237s
00:03:50.911 sys 0m19.102s
00:03:50.911 09:37:07 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:50.911 09:37:07 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:50.911 ************************************
00:03:50.911 END TEST setup.sh
00:03:50.911 ************************************
00:03:50.911 09:37:07 -- common/autotest_common.sh@1142 -- # return 0
00:03:50.911 09:37:07 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:52.285 Hugepages
00:03:52.285 node hugesize free / total
00:03:52.285 node0 1048576kB 0 / 0
00:03:52.285 node0 2048kB 2048 / 2048
00:03:52.285 node1 1048576kB 0 / 0
00:03:52.285 node1 2048kB 0 / 0
00:03:52.285
00:03:52.285 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:52.285 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:03:52.285 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:03:52.285 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:03:52.285 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:03:52.285 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:03:52.285 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:03:52.285 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:03:52.285 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:03:52.285 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:03:52.285 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:03:52.285 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:03:52.285 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:03:52.286 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:03:52.286 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:03:52.286 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:03:52.286 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:03:52.286 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:52.286 09:37:08 -- spdk/autotest.sh@130 -- # uname -s
00:03:52.286 09:37:08 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:03:52.286 09:37:08 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:03:52.286 09:37:08 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:53.661 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:53.661 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:53.661 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:53.661 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:53.662 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:53.662 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:53.662 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:53.662 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:53.662 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:53.662 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:53.662 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:53.662 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:53.662 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:53.662 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:53.662 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:53.662 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:54.600 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:54.600 09:37:11 -- common/autotest_common.sh@1532 -- # sleep 1
00:03:55.534 09:37:12 -- common/autotest_common.sh@1533 -- # bdfs=()
00:03:55.534 09:37:12 -- common/autotest_common.sh@1533 -- # local bdfs
00:03:55.534 09:37:12 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs))
00:03:55.534 09:37:12 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs
00:03:55.534 09:37:12 -- common/autotest_common.sh@1513 -- # bdfs=()
00:03:55.534 09:37:12 -- common/autotest_common.sh@1513 -- # local bdfs
00:03:55.534 09:37:12 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:55.534 09:37:12 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:55.534 09:37:12 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:03:55.534 09:37:12 -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:03:55.534 09:37:12 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0
00:03:55.534 09:37:12 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:56.907 Waiting for block devices as requested
00:03:56.907 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:03:56.907 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:03:56.907 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:03:57.166 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:03:57.166 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:03:57.166 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:03:57.166 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:03:57.492 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:03:57.492 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
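
The Hugepages table printed by setup.sh status above comes straight from sysfs; the same per-node summary can be reproduced with plain shell:

#!/usr/bin/env bash
# Print "nodeN <size> <free> / <total>" for every hugepage pool.
for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        size=${hp##*hugepages-}              # e.g. 2048kB
        echo "${node##*/} $size $(<"$hp/free_hugepages") / $(<"$hp/nr_hugepages")"
    done
done
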
00:03:57.492 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:03:57.492 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:03:57.492 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:03:57.791 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:03:57.791 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:03:57.791 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:03:57.791 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:03:58.049 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:03:58.049 09:37:14 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}"
00:03:58.049 09:37:14 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0
00:03:58.049 09:37:14 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0
00:03:58.049 09:37:14 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme
00:03:58.049 09:37:14 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:03:58.049 09:37:14 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]]
00:03:58.049 09:37:14 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:03:58.049 09:37:14 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0
00:03:58.049 09:37:14 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0
00:03:58.049 09:37:14 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]]
00:03:58.049 09:37:14 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0
00:03:58.049 09:37:14 -- common/autotest_common.sh@1545 -- # grep oacs
00:03:58.049 09:37:14 -- common/autotest_common.sh@1545 -- # cut -d: -f2
00:03:58.049 09:37:14 -- common/autotest_common.sh@1545 -- # oacs=' 0xf'
00:03:58.049 09:37:14 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8
00:03:58.049 09:37:14 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]]
00:03:58.049 09:37:14 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0
00:03:58.049 09:37:14 -- common/autotest_common.sh@1554 -- # grep unvmcap
00:03:58.049 09:37:14 -- common/autotest_common.sh@1554 -- # cut -d: -f2
00:03:58.049 09:37:14 -- common/autotest_common.sh@1554 -- # unvmcap=' 0'
00:03:58.049 09:37:14 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]]
00:03:58.049 09:37:14 -- common/autotest_common.sh@1557 -- # continue
00:03:58.049 09:37:14 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup
00:03:58.049 09:37:14 -- common/autotest_common.sh@728 -- # xtrace_disable
00:03:58.049 09:37:14 -- common/autotest_common.sh@10 -- # set +x
00:03:58.049 09:37:14 -- spdk/autotest.sh@138 -- # timing_enter afterboot
00:03:58.049 09:37:14 -- common/autotest_common.sh@722 -- # xtrace_disable
00:03:58.049 09:37:14 -- common/autotest_common.sh@10 -- # set +x
00:03:58.049 09:37:14 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:59.424 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:59.424 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:59.424 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:59.424 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:59.424 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:59.424 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:59.424 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:59.424 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:59.424 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:59.424 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
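
The nvme_namespace_revert probe above checks the controller's OACS (Optional Admin Command Support) field and its unallocated capacity: oacs came back 0xf, bit 3 (0x8) is namespace management, and unvmcap of 0 means there is no unallocated NVM to reclaim, so the loop continues. A sketch of the probe (deriving the logged value 8 by bit-masking is an assumption):

#!/usr/bin/env bash
ctrl=/dev/nvme0
oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)        # ' 0xf' on this box
oacs_ns_manage=$(( oacs & 0x8 ))                              # bit 3: ns management
if (( oacs_ns_manage != 0 )); then
    unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
    (( unvmcap == 0 )) && echo "nothing to revert on $ctrl"   # this run's case
fi
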
00:03:59.424 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:59.424 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:59.424 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:59.424 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:59.424 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:59.424 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:00.358 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:00.616 09:37:17 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:00.616 09:37:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:00.616 09:37:17 -- common/autotest_common.sh@10 -- # set +x 00:04:00.616 09:37:17 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:00.616 09:37:17 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:00.616 09:37:17 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:00.616 09:37:17 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:00.616 09:37:17 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:00.616 09:37:17 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:00.616 09:37:17 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:00.616 09:37:17 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:00.616 09:37:17 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:00.616 09:37:17 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:00.616 09:37:17 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:00.616 09:37:17 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:00.616 09:37:17 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:00.616 09:37:17 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:00.616 09:37:17 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:00.616 09:37:17 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:00.616 09:37:17 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:00.616 09:37:17 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:00.616 09:37:17 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:04:00.616 09:37:17 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:04:00.616 09:37:17 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1763816 00:04:00.616 09:37:17 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.616 09:37:17 -- common/autotest_common.sh@1598 -- # waitforlisten 1763816 00:04:00.616 09:37:17 -- common/autotest_common.sh@829 -- # '[' -z 1763816 ']' 00:04:00.616 09:37:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.616 09:37:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:00.616 09:37:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.616 09:37:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:00.616 09:37:17 -- common/autotest_common.sh@10 -- # set +x 00:04:00.616 [2024-07-15 09:37:17.279118] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
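
Every 'ioatdma -> vfio-pci' and 'nvme -> vfio-pci' line above is scripts/setup.sh rebinding one PCI function so DPDK can drive it from userspace. Reduced to its essentials, a rebind is plain sysfs driver manipulation; a sketch for a single device, run as root, using the NVMe BDF from this log:

  bdf=0000:88:00.0
  modprobe vfio-pci
  echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"       # detach from the kernel nvme driver
  echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"   # pin the next probe to vfio-pci
  echo "$bdf" > /sys/bus/pci/drivers_probe                      # reprobe: the device reappears under vfio-pci
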
00:04:00.616 [2024-07-15 09:37:17.279211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763816 ] 00:04:00.616 EAL: No free 2048 kB hugepages reported on node 1 00:04:00.616 [2024-07-15 09:37:17.310886] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:00.616 [2024-07-15 09:37:17.343444] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.874 [2024-07-15 09:37:17.433191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.131 09:37:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:01.131 09:37:17 -- common/autotest_common.sh@862 -- # return 0 00:04:01.131 09:37:17 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:01.131 09:37:17 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:01.131 09:37:17 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:04.411 nvme0n1 00:04:04.411 09:37:20 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:04.411 [2024-07-15 09:37:20.993438] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:04.411 [2024-07-15 09:37:20.993482] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:04.411 request: 00:04:04.411 { 00:04:04.411 "nvme_ctrlr_name": "nvme0", 00:04:04.411 "password": "test", 00:04:04.411 "method": "bdev_nvme_opal_revert", 00:04:04.411 "req_id": 1 00:04:04.411 } 00:04:04.411 Got JSON-RPC error response 00:04:04.411 response: 00:04:04.411 { 00:04:04.411 "code": -32603, 00:04:04.411 "message": "Internal error" 00:04:04.411 } 00:04:04.411 09:37:21 -- common/autotest_common.sh@1604 -- # true 00:04:04.411 09:37:21 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:04.411 09:37:21 -- common/autotest_common.sh@1608 -- # killprocess 1763816 00:04:04.411 09:37:21 -- common/autotest_common.sh@948 -- # '[' -z 1763816 ']' 00:04:04.411 09:37:21 -- common/autotest_common.sh@952 -- # kill -0 1763816 00:04:04.411 09:37:21 -- common/autotest_common.sh@953 -- # uname 00:04:04.411 09:37:21 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:04.411 09:37:21 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1763816 00:04:04.411 09:37:21 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:04.411 09:37:21 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:04.411 09:37:21 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1763816' 00:04:04.411 killing process with pid 1763816 00:04:04.411 09:37:21 -- common/autotest_common.sh@967 -- # kill 1763816 00:04:04.411 09:37:21 -- common/autotest_common.sh@972 -- # wait 1763816 00:04:06.308 09:37:22 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:06.308 09:37:22 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:06.308 09:37:22 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:06.308 09:37:22 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:06.308 09:37:22 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:06.308 09:37:22 -- common/autotest_common.sh@722 -- # xtrace_disable 
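
The JSON-RPC exchange above is two rpc.py calls against the freshly started spdk_tgt (socket defaulting to /var/tmp/spdk.sock): attach the controller, then revert its Opal TPer. Reissued by hand they look like the sketch below; the -32603 'Internal error' is the expected outcome here, since the drive rejects the revert with Opal error 18:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
  $rpc bdev_nvme_opal_revert -b nvme0 -p test    # fails with -32603, mirroring the response logged above
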
00:04:06.308 09:37:22 -- common/autotest_common.sh@10 -- # set +x 00:04:06.308 09:37:22 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:06.308 09:37:22 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:06.308 09:37:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.308 09:37:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.308 09:37:22 -- common/autotest_common.sh@10 -- # set +x 00:04:06.308 ************************************ 00:04:06.308 START TEST env 00:04:06.308 ************************************ 00:04:06.308 09:37:22 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:06.308 * Looking for test storage... 00:04:06.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:06.308 09:37:22 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:06.308 09:37:22 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.308 09:37:22 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.308 09:37:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.308 ************************************ 00:04:06.308 START TEST env_memory 00:04:06.308 ************************************ 00:04:06.308 09:37:22 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:06.308 00:04:06.308 00:04:06.308 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.308 http://cunit.sourceforge.net/ 00:04:06.308 00:04:06.308 00:04:06.308 Suite: memory 00:04:06.308 Test: alloc and free memory map ...[2024-07-15 09:37:22.931019] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:06.308 passed 00:04:06.308 Test: mem map translation ...[2024-07-15 09:37:22.952149] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:06.308 [2024-07-15 09:37:22.952193] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:06.308 [2024-07-15 09:37:22.952235] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:06.309 [2024-07-15 09:37:22.952258] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:06.309 passed 00:04:06.309 Test: mem map registration ...[2024-07-15 09:37:22.995028] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:06.309 [2024-07-15 09:37:22.995049] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:06.309 passed 00:04:06.309 Test: mem map adjacent registrations ...passed 00:04:06.309 00:04:06.309 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.309 suites 1 1 n/a 0 0 00:04:06.309 tests 4 4 4 0 0 00:04:06.309 
asserts 152 152 152 0 n/a 00:04:06.309 00:04:06.309 Elapsed time = 0.144 seconds 00:04:06.309 00:04:06.309 real 0m0.152s 00:04:06.309 user 0m0.144s 00:04:06.309 sys 0m0.007s 00:04:06.309 09:37:23 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.309 09:37:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:06.309 ************************************ 00:04:06.309 END TEST env_memory 00:04:06.309 ************************************ 00:04:06.309 09:37:23 env -- common/autotest_common.sh@1142 -- # return 0 00:04:06.309 09:37:23 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:06.309 09:37:23 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.309 09:37:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.309 09:37:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.567 ************************************ 00:04:06.567 START TEST env_vtophys 00:04:06.567 ************************************ 00:04:06.567 09:37:23 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:06.567 EAL: lib.eal log level changed from notice to debug 00:04:06.567 EAL: Detected lcore 0 as core 0 on socket 0 00:04:06.567 EAL: Detected lcore 1 as core 1 on socket 0 00:04:06.567 EAL: Detected lcore 2 as core 2 on socket 0 00:04:06.567 EAL: Detected lcore 3 as core 3 on socket 0 00:04:06.567 EAL: Detected lcore 4 as core 4 on socket 0 00:04:06.567 EAL: Detected lcore 5 as core 5 on socket 0 00:04:06.567 EAL: Detected lcore 6 as core 8 on socket 0 00:04:06.567 EAL: Detected lcore 7 as core 9 on socket 0 00:04:06.567 EAL: Detected lcore 8 as core 10 on socket 0 00:04:06.567 EAL: Detected lcore 9 as core 11 on socket 0 00:04:06.567 EAL: Detected lcore 10 as core 12 on socket 0 00:04:06.567 EAL: Detected lcore 11 as core 13 on socket 0 00:04:06.567 EAL: Detected lcore 12 as core 0 on socket 1 00:04:06.567 EAL: Detected lcore 13 as core 1 on socket 1 00:04:06.567 EAL: Detected lcore 14 as core 2 on socket 1 00:04:06.567 EAL: Detected lcore 15 as core 3 on socket 1 00:04:06.567 EAL: Detected lcore 16 as core 4 on socket 1 00:04:06.567 EAL: Detected lcore 17 as core 5 on socket 1 00:04:06.567 EAL: Detected lcore 18 as core 8 on socket 1 00:04:06.567 EAL: Detected lcore 19 as core 9 on socket 1 00:04:06.567 EAL: Detected lcore 20 as core 10 on socket 1 00:04:06.567 EAL: Detected lcore 21 as core 11 on socket 1 00:04:06.567 EAL: Detected lcore 22 as core 12 on socket 1 00:04:06.567 EAL: Detected lcore 23 as core 13 on socket 1 00:04:06.567 EAL: Detected lcore 24 as core 0 on socket 0 00:04:06.567 EAL: Detected lcore 25 as core 1 on socket 0 00:04:06.567 EAL: Detected lcore 26 as core 2 on socket 0 00:04:06.567 EAL: Detected lcore 27 as core 3 on socket 0 00:04:06.567 EAL: Detected lcore 28 as core 4 on socket 0 00:04:06.567 EAL: Detected lcore 29 as core 5 on socket 0 00:04:06.567 EAL: Detected lcore 30 as core 8 on socket 0 00:04:06.567 EAL: Detected lcore 31 as core 9 on socket 0 00:04:06.567 EAL: Detected lcore 32 as core 10 on socket 0 00:04:06.567 EAL: Detected lcore 33 as core 11 on socket 0 00:04:06.567 EAL: Detected lcore 34 as core 12 on socket 0 00:04:06.567 EAL: Detected lcore 35 as core 13 on socket 0 00:04:06.567 EAL: Detected lcore 36 as core 0 on socket 1 00:04:06.567 EAL: Detected lcore 37 as core 1 on socket 1 00:04:06.567 EAL: Detected lcore 38 as core 2 on socket 1 
00:04:06.567 EAL: Detected lcore 39 as core 3 on socket 1 00:04:06.567 EAL: Detected lcore 40 as core 4 on socket 1 00:04:06.567 EAL: Detected lcore 41 as core 5 on socket 1 00:04:06.567 EAL: Detected lcore 42 as core 8 on socket 1 00:04:06.567 EAL: Detected lcore 43 as core 9 on socket 1 00:04:06.567 EAL: Detected lcore 44 as core 10 on socket 1 00:04:06.567 EAL: Detected lcore 45 as core 11 on socket 1 00:04:06.567 EAL: Detected lcore 46 as core 12 on socket 1 00:04:06.567 EAL: Detected lcore 47 as core 13 on socket 1 00:04:06.567 EAL: Maximum logical cores by configuration: 128 00:04:06.567 EAL: Detected CPU lcores: 48 00:04:06.567 EAL: Detected NUMA nodes: 2 00:04:06.567 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:04:06.567 EAL: Detected shared linkage of DPDK 00:04:06.567 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:04:06.567 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:04:06.567 EAL: Registered [vdev] bus. 00:04:06.567 EAL: bus.vdev log level changed from disabled to notice 00:04:06.567 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:04:06.567 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:04:06.567 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:06.567 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:06.567 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:04:06.567 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:04:06.567 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:04:06.567 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:04:06.567 EAL: No shared files mode enabled, IPC will be disabled 00:04:06.567 EAL: No shared files mode enabled, IPC is disabled 00:04:06.567 EAL: Bus pci wants IOVA as 'DC' 00:04:06.567 EAL: Bus vdev wants IOVA as 'DC' 00:04:06.567 EAL: Buses did not request a specific IOVA mode. 00:04:06.567 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:06.567 EAL: Selected IOVA mode 'VA' 00:04:06.567 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.567 EAL: Probing VFIO support... 00:04:06.568 EAL: IOMMU type 1 (Type 1) is supported 00:04:06.568 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:06.568 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:06.568 EAL: VFIO support initialized 00:04:06.568 EAL: Ask a virtual area of 0x2e000 bytes 00:04:06.568 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:06.568 EAL: Setting up physically contiguous memory... 
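
The topology EAL reports in this stretch (48 lcores on 2 sockets, IOMMU type 1 with VFIO usable, 2048 kB hugepages) can be cross-checked from standard sysfs/procfs paths; a quick sanity sketch, with the NVMe BDF from this run:

  nproc                                                    # 48, matching 'Detected CPU lcores: 48'
  ls -d /sys/devices/system/node/node* | wc -l             # 2, matching 'Detected NUMA nodes: 2'
  readlink /sys/bus/pci/devices/0000:88:00.0/iommu_group   # resolves only when the device sits in an IOMMU group
  grep -i hugepages /proc/meminfo                          # the 2048 kB pages backing the memseg lists below
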
00:04:06.568 EAL: Setting maximum number of open files to 524288 00:04:06.568 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:06.568 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:06.568 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:06.568 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.568 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:06.568 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.568 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.568 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:06.568 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:06.568 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.568 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:06.568 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.568 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.568 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:06.568 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:06.568 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.568 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:06.568 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.568 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.568 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:06.568 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:06.568 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.568 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:06.568 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.568 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.568 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:06.568 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:06.568 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:06.568 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.568 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:06.568 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:06.568 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.568 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:06.568 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:06.568 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.568 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:06.568 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:06.568 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.568 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:06.568 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:06.568 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.568 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:06.568 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:06.568 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.568 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:06.568 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:06.568 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.568 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:06.568 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:06.568 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.568 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:06.568 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:06.568 EAL: Hugepages will be freed exactly as allocated. 00:04:06.568 EAL: No shared files mode enabled, IPC is disabled 00:04:06.568 EAL: No shared files mode enabled, IPC is disabled 00:04:06.568 EAL: TSC frequency is ~2700000 KHz 00:04:06.568 EAL: Main lcore 0 is ready (tid=7fe731a9da00;cpuset=[0]) 00:04:06.568 EAL: Trying to obtain current memory policy. 00:04:06.568 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.568 EAL: Restoring previous memory policy: 0 00:04:06.568 EAL: request: mp_malloc_sync 00:04:06.568 EAL: No shared files mode enabled, IPC is disabled 00:04:06.568 EAL: Heap on socket 0 was expanded by 2MB 00:04:06.568 EAL: No shared files mode enabled, IPC is disabled 00:04:06.568 EAL: No shared files mode enabled, IPC is disabled 00:04:06.568 EAL: Mem event callback 'spdk:(nil)' registered 00:04:06.568 00:04:06.568 00:04:06.568 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.568 http://cunit.sourceforge.net/ 00:04:06.568 00:04:06.568 00:04:06.568 Suite: components_suite 00:04:06.568 Test: vtophys_malloc_test ...passed 00:04:06.568 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:06.568 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.568 EAL: Restoring previous memory policy: 4 00:04:06.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.568 EAL: request: mp_malloc_sync 00:04:06.568 EAL: No shared files mode enabled, IPC is disabled 00:04:06.568 EAL: Heap on socket 0 was expanded by 4MB 00:04:06.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.568 EAL: request: mp_malloc_sync 00:04:06.568 EAL: No shared files mode enabled, IPC is disabled 00:04:06.568 EAL: Heap on socket 0 was shrunk by 4MB 00:04:06.568 EAL: Trying to obtain current memory policy. 00:04:06.568 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.568 EAL: Restoring previous memory policy: 4 00:04:06.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.568 EAL: request: mp_malloc_sync 00:04:06.568 EAL: No shared files mode enabled, IPC is disabled 00:04:06.568 EAL: Heap on socket 0 was expanded by 6MB 00:04:06.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.568 EAL: request: mp_malloc_sync 00:04:06.568 EAL: No shared files mode enabled, IPC is disabled 00:04:06.568 EAL: Heap on socket 0 was shrunk by 6MB 00:04:06.568 EAL: Trying to obtain current memory policy. 00:04:06.568 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.568 EAL: Restoring previous memory policy: 4 00:04:06.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.568 EAL: request: mp_malloc_sync 00:04:06.568 EAL: No shared files mode enabled, IPC is disabled 00:04:06.568 EAL: Heap on socket 0 was expanded by 10MB 00:04:06.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.568 EAL: request: mp_malloc_sync 00:04:06.568 EAL: No shared files mode enabled, IPC is disabled 00:04:06.568 EAL: Heap on socket 0 was shrunk by 10MB 00:04:06.568 EAL: Trying to obtain current memory policy. 
00:04:06.568 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.568 EAL: Restoring previous memory policy: 4 00:04:06.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.568 EAL: request: mp_malloc_sync 00:04:06.568 EAL: No shared files mode enabled, IPC is disabled 00:04:06.568 EAL: Heap on socket 0 was expanded by 18MB 00:04:06.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.568 EAL: request: mp_malloc_sync 00:04:06.568 EAL: No shared files mode enabled, IPC is disabled 00:04:06.568 EAL: Heap on socket 0 was shrunk by 18MB 00:04:06.568 EAL: Trying to obtain current memory policy. 00:04:06.568 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.568 EAL: Restoring previous memory policy: 4 00:04:06.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.568 EAL: request: mp_malloc_sync 00:04:06.568 EAL: No shared files mode enabled, IPC is disabled 00:04:06.568 EAL: Heap on socket 0 was expanded by 34MB 00:04:06.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.568 EAL: request: mp_malloc_sync 00:04:06.568 EAL: No shared files mode enabled, IPC is disabled 00:04:06.568 EAL: Heap on socket 0 was shrunk by 34MB 00:04:06.568 EAL: Trying to obtain current memory policy. 00:04:06.568 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.568 EAL: Restoring previous memory policy: 4 00:04:06.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.568 EAL: request: mp_malloc_sync 00:04:06.568 EAL: No shared files mode enabled, IPC is disabled 00:04:06.568 EAL: Heap on socket 0 was expanded by 66MB 00:04:06.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.568 EAL: request: mp_malloc_sync 00:04:06.568 EAL: No shared files mode enabled, IPC is disabled 00:04:06.568 EAL: Heap on socket 0 was shrunk by 66MB 00:04:06.568 EAL: Trying to obtain current memory policy. 00:04:06.568 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.568 EAL: Restoring previous memory policy: 4 00:04:06.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.568 EAL: request: mp_malloc_sync 00:04:06.568 EAL: No shared files mode enabled, IPC is disabled 00:04:06.568 EAL: Heap on socket 0 was expanded by 130MB 00:04:06.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.568 EAL: request: mp_malloc_sync 00:04:06.568 EAL: No shared files mode enabled, IPC is disabled 00:04:06.568 EAL: Heap on socket 0 was shrunk by 130MB 00:04:06.568 EAL: Trying to obtain current memory policy. 00:04:06.568 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.825 EAL: Restoring previous memory policy: 4 00:04:06.825 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.825 EAL: request: mp_malloc_sync 00:04:06.825 EAL: No shared files mode enabled, IPC is disabled 00:04:06.825 EAL: Heap on socket 0 was expanded by 258MB 00:04:06.825 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.825 EAL: request: mp_malloc_sync 00:04:06.825 EAL: No shared files mode enabled, IPC is disabled 00:04:06.825 EAL: Heap on socket 0 was shrunk by 258MB 00:04:06.825 EAL: Trying to obtain current memory policy. 
00:04:06.825 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.083 EAL: Restoring previous memory policy: 4 00:04:07.083 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.083 EAL: request: mp_malloc_sync 00:04:07.083 EAL: No shared files mode enabled, IPC is disabled 00:04:07.083 EAL: Heap on socket 0 was expanded by 514MB 00:04:07.083 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.083 EAL: request: mp_malloc_sync 00:04:07.083 EAL: No shared files mode enabled, IPC is disabled 00:04:07.083 EAL: Heap on socket 0 was shrunk by 514MB 00:04:07.083 EAL: Trying to obtain current memory policy. 00:04:07.083 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.648 EAL: Restoring previous memory policy: 4 00:04:07.648 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.648 EAL: request: mp_malloc_sync 00:04:07.648 EAL: No shared files mode enabled, IPC is disabled 00:04:07.648 EAL: Heap on socket 0 was expanded by 1026MB 00:04:07.648 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.906 EAL: request: mp_malloc_sync 00:04:07.906 EAL: No shared files mode enabled, IPC is disabled 00:04:07.906 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:07.906 passed 00:04:07.906 00:04:07.906 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.906 suites 1 1 n/a 0 0 00:04:07.906 tests 2 2 2 0 0 00:04:07.906 asserts 497 497 497 0 n/a 00:04:07.906 00:04:07.906 Elapsed time = 1.347 seconds 00:04:07.906 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.906 EAL: request: mp_malloc_sync 00:04:07.906 EAL: No shared files mode enabled, IPC is disabled 00:04:07.906 EAL: Heap on socket 0 was shrunk by 2MB 00:04:07.906 EAL: No shared files mode enabled, IPC is disabled 00:04:07.906 EAL: No shared files mode enabled, IPC is disabled 00:04:07.906 EAL: No shared files mode enabled, IPC is disabled 00:04:07.906 00:04:07.906 real 0m1.468s 00:04:07.906 user 0m0.841s 00:04:07.906 sys 0m0.592s 00:04:07.906 09:37:24 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.906 09:37:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:07.906 ************************************ 00:04:07.906 END TEST env_vtophys 00:04:07.906 ************************************ 00:04:07.906 09:37:24 env -- common/autotest_common.sh@1142 -- # return 0 00:04:07.906 09:37:24 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:07.906 09:37:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.906 09:37:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.906 09:37:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.906 ************************************ 00:04:07.906 START TEST env_pci 00:04:07.906 ************************************ 00:04:07.906 09:37:24 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:07.906 00:04:07.906 00:04:07.906 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.906 http://cunit.sourceforge.net/ 00:04:07.906 00:04:07.906 00:04:07.906 Suite: pci 00:04:07.906 Test: pci_hook ...[2024-07-15 09:37:24.618758] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1764708 has claimed it 00:04:07.906 EAL: Cannot find device (10000:00:01.0) 00:04:07.906 EAL: Failed to attach device on primary process 00:04:07.906 passed 00:04:07.906 
00:04:07.906 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.906 suites 1 1 n/a 0 0 00:04:07.906 tests 1 1 1 0 0 00:04:07.906 asserts 25 25 25 0 n/a 00:04:07.906 00:04:07.906 Elapsed time = 0.021 seconds 00:04:07.906 00:04:07.906 real 0m0.033s 00:04:07.906 user 0m0.011s 00:04:07.906 sys 0m0.022s 00:04:07.906 09:37:24 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.906 09:37:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:07.906 ************************************ 00:04:07.906 END TEST env_pci 00:04:07.906 ************************************ 00:04:07.906 09:37:24 env -- common/autotest_common.sh@1142 -- # return 0 00:04:07.906 09:37:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:07.906 09:37:24 env -- env/env.sh@15 -- # uname 00:04:07.906 09:37:24 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:07.906 09:37:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:07.906 09:37:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:07.906 09:37:24 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:07.906 09:37:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.906 09:37:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.906 ************************************ 00:04:07.906 START TEST env_dpdk_post_init 00:04:07.906 ************************************ 00:04:07.906 09:37:24 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:08.165 EAL: Detected CPU lcores: 48 00:04:08.165 EAL: Detected NUMA nodes: 2 00:04:08.165 EAL: Detected shared linkage of DPDK 00:04:08.165 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:08.165 EAL: Selected IOVA mode 'VA' 00:04:08.165 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.165 EAL: VFIO support initialized 00:04:08.165 EAL: Using IOMMU type 1 (Type 1) 00:04:12.346 Starting DPDK initialization... 00:04:12.346 Starting SPDK post initialization... 00:04:12.346 SPDK NVMe probe 00:04:12.346 Attaching to 0000:88:00.0 00:04:12.346 Attached to 0000:88:00.0 00:04:12.346 Cleaning up... 
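
The env_dpdk_post_init pass above (attach to 0000:88:00.0, probe, clean up) can be reproduced outside the harness with the same arguments the runner used; a sketch against this workspace's paths, assuming the devices are already bound to vfio-pci:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo "$spdk/scripts/setup.sh"    # rebind NVMe + I/OAT to vfio-pci first
  sudo "$spdk/test/env/env_dpdk_post_init/env_dpdk_post_init" -c 0x1 --base-virtaddr=0x200000000000
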
00:04:12.346 00:04:12.346 real 0m4.388s 00:04:12.346 user 0m3.249s 00:04:12.346 sys 0m0.196s 00:04:12.346 09:37:29 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.346 09:37:29 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:12.346 ************************************ 00:04:12.346 END TEST env_dpdk_post_init 00:04:12.346 ************************************ 00:04:12.346 09:37:29 env -- common/autotest_common.sh@1142 -- # return 0 00:04:12.346 09:37:29 env -- env/env.sh@26 -- # uname 00:04:12.346 09:37:29 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:12.346 09:37:29 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:12.346 09:37:29 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.346 09:37:29 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.347 09:37:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.347 ************************************ 00:04:12.347 START TEST env_mem_callbacks 00:04:12.347 ************************************ 00:04:12.347 09:37:29 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:12.605 EAL: Detected CPU lcores: 48 00:04:12.605 EAL: Detected NUMA nodes: 2 00:04:12.605 EAL: Detected shared linkage of DPDK 00:04:12.605 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:12.605 EAL: Selected IOVA mode 'VA' 00:04:12.605 EAL: No free 2048 kB hugepages reported on node 1 00:04:12.605 EAL: VFIO support initialized 00:04:12.605 00:04:12.605 00:04:12.605 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.605 http://cunit.sourceforge.net/ 00:04:12.605 00:04:12.605 00:04:12.605 Suite: memory 00:04:12.605 Test: test ... 
00:04:12.605 register 0x200000200000 2097152 00:04:12.605 malloc 3145728 00:04:12.605 register 0x200000400000 4194304 00:04:12.605 buf 0x200000500000 len 3145728 PASSED 00:04:12.605 malloc 64 00:04:12.605 buf 0x2000004fff40 len 64 PASSED 00:04:12.605 malloc 4194304 00:04:12.605 register 0x200000800000 6291456 00:04:12.605 buf 0x200000a00000 len 4194304 PASSED 00:04:12.605 free 0x200000500000 3145728 00:04:12.605 free 0x2000004fff40 64 00:04:12.605 unregister 0x200000400000 4194304 PASSED 00:04:12.605 free 0x200000a00000 4194304 00:04:12.605 unregister 0x200000800000 6291456 PASSED 00:04:12.605 malloc 8388608 00:04:12.605 register 0x200000400000 10485760 00:04:12.605 buf 0x200000600000 len 8388608 PASSED 00:04:12.605 free 0x200000600000 8388608 00:04:12.605 unregister 0x200000400000 10485760 PASSED 00:04:12.605 passed 00:04:12.605 00:04:12.605 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.605 suites 1 1 n/a 0 0 00:04:12.605 tests 1 1 1 0 0 00:04:12.605 asserts 15 15 15 0 n/a 00:04:12.605 00:04:12.605 Elapsed time = 0.005 seconds 00:04:12.605 00:04:12.605 real 0m0.049s 00:04:12.605 user 0m0.014s 00:04:12.605 sys 0m0.035s 00:04:12.605 09:37:29 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.605 09:37:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:12.605 ************************************ 00:04:12.605 END TEST env_mem_callbacks 00:04:12.605 ************************************ 00:04:12.605 09:37:29 env -- common/autotest_common.sh@1142 -- # return 0 00:04:12.605 00:04:12.605 real 0m6.373s 00:04:12.605 user 0m4.382s 00:04:12.605 sys 0m1.030s 00:04:12.605 09:37:29 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.605 09:37:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.605 ************************************ 00:04:12.605 END TEST env 00:04:12.605 ************************************ 00:04:12.605 09:37:29 -- common/autotest_common.sh@1142 -- # return 0 00:04:12.605 09:37:29 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:12.605 09:37:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.605 09:37:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.605 09:37:29 -- common/autotest_common.sh@10 -- # set +x 00:04:12.605 ************************************ 00:04:12.605 START TEST rpc 00:04:12.605 ************************************ 00:04:12.605 09:37:29 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:12.605 * Looking for test storage... 00:04:12.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:12.605 09:37:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1765360 00:04:12.605 09:37:29 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:12.605 09:37:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:12.605 09:37:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1765360 00:04:12.605 09:37:29 rpc -- common/autotest_common.sh@829 -- # '[' -z 1765360 ']' 00:04:12.605 09:37:29 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.605 09:37:29 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:12.605 09:37:29 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:12.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.605 09:37:29 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:12.605 09:37:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.605 [2024-07-15 09:37:29.348936] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:04:12.605 [2024-07-15 09:37:29.349030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1765360 ] 00:04:12.605 EAL: No free 2048 kB hugepages reported on node 1 00:04:12.605 [2024-07-15 09:37:29.380626] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:12.863 [2024-07-15 09:37:29.407743] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.863 [2024-07-15 09:37:29.492829] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:12.863 [2024-07-15 09:37:29.492913] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1765360' to capture a snapshot of events at runtime. 00:04:12.863 [2024-07-15 09:37:29.492943] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:12.863 [2024-07-15 09:37:29.492955] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:12.863 [2024-07-15 09:37:29.492965] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1765360 for offline analysis/debug. 00:04:12.863 [2024-07-15 09:37:29.493004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.121 09:37:29 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:13.121 09:37:29 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:13.121 09:37:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:13.121 09:37:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:13.121 09:37:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:13.121 09:37:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:13.121 09:37:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.121 09:37:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.121 09:37:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.121 ************************************ 00:04:13.121 START TEST rpc_integrity 00:04:13.121 ************************************ 00:04:13.121 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:13.121 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:13.121 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.121 09:37:29 rpc.rpc_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:13.121 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.121 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:13.121 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:13.121 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:13.121 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:13.121 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.121 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.121 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.121 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:13.121 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:13.121 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.121 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.121 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.121 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:13.121 { 00:04:13.121 "name": "Malloc0", 00:04:13.121 "aliases": [ 00:04:13.121 "795c35f8-2c0d-4b8e-a5fe-bcbcae281a39" 00:04:13.121 ], 00:04:13.121 "product_name": "Malloc disk", 00:04:13.121 "block_size": 512, 00:04:13.121 "num_blocks": 16384, 00:04:13.121 "uuid": "795c35f8-2c0d-4b8e-a5fe-bcbcae281a39", 00:04:13.121 "assigned_rate_limits": { 00:04:13.121 "rw_ios_per_sec": 0, 00:04:13.121 "rw_mbytes_per_sec": 0, 00:04:13.121 "r_mbytes_per_sec": 0, 00:04:13.121 "w_mbytes_per_sec": 0 00:04:13.121 }, 00:04:13.121 "claimed": false, 00:04:13.121 "zoned": false, 00:04:13.121 "supported_io_types": { 00:04:13.121 "read": true, 00:04:13.121 "write": true, 00:04:13.121 "unmap": true, 00:04:13.121 "flush": true, 00:04:13.121 "reset": true, 00:04:13.121 "nvme_admin": false, 00:04:13.121 "nvme_io": false, 00:04:13.121 "nvme_io_md": false, 00:04:13.121 "write_zeroes": true, 00:04:13.121 "zcopy": true, 00:04:13.121 "get_zone_info": false, 00:04:13.121 "zone_management": false, 00:04:13.121 "zone_append": false, 00:04:13.121 "compare": false, 00:04:13.121 "compare_and_write": false, 00:04:13.121 "abort": true, 00:04:13.121 "seek_hole": false, 00:04:13.121 "seek_data": false, 00:04:13.121 "copy": true, 00:04:13.121 "nvme_iov_md": false 00:04:13.121 }, 00:04:13.121 "memory_domains": [ 00:04:13.121 { 00:04:13.121 "dma_device_id": "system", 00:04:13.121 "dma_device_type": 1 00:04:13.121 }, 00:04:13.121 { 00:04:13.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.121 "dma_device_type": 2 00:04:13.121 } 00:04:13.121 ], 00:04:13.121 "driver_specific": {} 00:04:13.121 } 00:04:13.121 ]' 00:04:13.121 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:13.121 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:13.121 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:13.121 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.121 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.121 [2024-07-15 09:37:29.872673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:13.121 [2024-07-15 09:37:29.872716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:13.121 [2024-07-15 09:37:29.872739] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bce7f0 00:04:13.121 [2024-07-15 09:37:29.872754] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:13.121 [2024-07-15 09:37:29.874210] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:13.121 [2024-07-15 09:37:29.874238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:13.121 Passthru0 00:04:13.121 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.121 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:13.121 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.121 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.121 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.121 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:13.121 { 00:04:13.121 "name": "Malloc0", 00:04:13.121 "aliases": [ 00:04:13.121 "795c35f8-2c0d-4b8e-a5fe-bcbcae281a39" 00:04:13.121 ], 00:04:13.121 "product_name": "Malloc disk", 00:04:13.121 "block_size": 512, 00:04:13.121 "num_blocks": 16384, 00:04:13.121 "uuid": "795c35f8-2c0d-4b8e-a5fe-bcbcae281a39", 00:04:13.121 "assigned_rate_limits": { 00:04:13.121 "rw_ios_per_sec": 0, 00:04:13.121 "rw_mbytes_per_sec": 0, 00:04:13.121 "r_mbytes_per_sec": 0, 00:04:13.121 "w_mbytes_per_sec": 0 00:04:13.121 }, 00:04:13.121 "claimed": true, 00:04:13.121 "claim_type": "exclusive_write", 00:04:13.121 "zoned": false, 00:04:13.121 "supported_io_types": { 00:04:13.121 "read": true, 00:04:13.121 "write": true, 00:04:13.121 "unmap": true, 00:04:13.121 "flush": true, 00:04:13.121 "reset": true, 00:04:13.121 "nvme_admin": false, 00:04:13.121 "nvme_io": false, 00:04:13.121 "nvme_io_md": false, 00:04:13.121 "write_zeroes": true, 00:04:13.121 "zcopy": true, 00:04:13.121 "get_zone_info": false, 00:04:13.121 "zone_management": false, 00:04:13.121 "zone_append": false, 00:04:13.121 "compare": false, 00:04:13.121 "compare_and_write": false, 00:04:13.121 "abort": true, 00:04:13.121 "seek_hole": false, 00:04:13.121 "seek_data": false, 00:04:13.121 "copy": true, 00:04:13.121 "nvme_iov_md": false 00:04:13.121 }, 00:04:13.121 "memory_domains": [ 00:04:13.121 { 00:04:13.121 "dma_device_id": "system", 00:04:13.121 "dma_device_type": 1 00:04:13.121 }, 00:04:13.121 { 00:04:13.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.121 "dma_device_type": 2 00:04:13.121 } 00:04:13.121 ], 00:04:13.121 "driver_specific": {} 00:04:13.121 }, 00:04:13.121 { 00:04:13.121 "name": "Passthru0", 00:04:13.121 "aliases": [ 00:04:13.121 "25cf3923-f22e-577f-9537-8253fac70324" 00:04:13.121 ], 00:04:13.121 "product_name": "passthru", 00:04:13.121 "block_size": 512, 00:04:13.121 "num_blocks": 16384, 00:04:13.121 "uuid": "25cf3923-f22e-577f-9537-8253fac70324", 00:04:13.121 "assigned_rate_limits": { 00:04:13.121 "rw_ios_per_sec": 0, 00:04:13.121 "rw_mbytes_per_sec": 0, 00:04:13.121 "r_mbytes_per_sec": 0, 00:04:13.121 "w_mbytes_per_sec": 0 00:04:13.121 }, 00:04:13.121 "claimed": false, 00:04:13.122 "zoned": false, 00:04:13.122 "supported_io_types": { 00:04:13.122 "read": true, 00:04:13.122 "write": true, 00:04:13.122 "unmap": true, 00:04:13.122 "flush": true, 00:04:13.122 "reset": true, 00:04:13.122 "nvme_admin": false, 00:04:13.122 "nvme_io": false, 00:04:13.122 "nvme_io_md": false, 00:04:13.122 "write_zeroes": true, 00:04:13.122 "zcopy": true, 00:04:13.122 "get_zone_info": false, 
00:04:13.122 "zone_management": false, 00:04:13.122 "zone_append": false, 00:04:13.122 "compare": false, 00:04:13.122 "compare_and_write": false, 00:04:13.122 "abort": true, 00:04:13.122 "seek_hole": false, 00:04:13.122 "seek_data": false, 00:04:13.122 "copy": true, 00:04:13.122 "nvme_iov_md": false 00:04:13.122 }, 00:04:13.122 "memory_domains": [ 00:04:13.122 { 00:04:13.122 "dma_device_id": "system", 00:04:13.122 "dma_device_type": 1 00:04:13.122 }, 00:04:13.122 { 00:04:13.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.122 "dma_device_type": 2 00:04:13.122 } 00:04:13.122 ], 00:04:13.122 "driver_specific": { 00:04:13.122 "passthru": { 00:04:13.122 "name": "Passthru0", 00:04:13.122 "base_bdev_name": "Malloc0" 00:04:13.122 } 00:04:13.122 } 00:04:13.122 } 00:04:13.122 ]' 00:04:13.122 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:13.380 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:13.380 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:13.380 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.380 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.380 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.380 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:13.380 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.380 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.380 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.380 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:13.380 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.380 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.380 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.380 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:13.380 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:13.380 09:37:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:13.380 00:04:13.380 real 0m0.230s 00:04:13.380 user 0m0.150s 00:04:13.380 sys 0m0.023s 00:04:13.380 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.380 09:37:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.380 ************************************ 00:04:13.380 END TEST rpc_integrity 00:04:13.380 ************************************ 00:04:13.380 09:37:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:13.380 09:37:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:13.380 09:37:30 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.380 09:37:30 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.380 09:37:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.380 ************************************ 00:04:13.380 START TEST rpc_plugins 00:04:13.380 ************************************ 00:04:13.380 09:37:30 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:13.380 09:37:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:13.380 09:37:30 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.380 09:37:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.380 
09:37:30 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.380 09:37:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:13.380 09:37:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:13.380 09:37:30 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.380 09:37:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.380 09:37:30 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.380 09:37:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:13.380 { 00:04:13.380 "name": "Malloc1", 00:04:13.380 "aliases": [ 00:04:13.380 "eda21055-7afa-4656-82ad-4c8a59fce53b" 00:04:13.380 ], 00:04:13.380 "product_name": "Malloc disk", 00:04:13.380 "block_size": 4096, 00:04:13.380 "num_blocks": 256, 00:04:13.380 "uuid": "eda21055-7afa-4656-82ad-4c8a59fce53b", 00:04:13.380 "assigned_rate_limits": { 00:04:13.380 "rw_ios_per_sec": 0, 00:04:13.380 "rw_mbytes_per_sec": 0, 00:04:13.380 "r_mbytes_per_sec": 0, 00:04:13.380 "w_mbytes_per_sec": 0 00:04:13.380 }, 00:04:13.380 "claimed": false, 00:04:13.380 "zoned": false, 00:04:13.380 "supported_io_types": { 00:04:13.380 "read": true, 00:04:13.380 "write": true, 00:04:13.380 "unmap": true, 00:04:13.380 "flush": true, 00:04:13.380 "reset": true, 00:04:13.380 "nvme_admin": false, 00:04:13.380 "nvme_io": false, 00:04:13.380 "nvme_io_md": false, 00:04:13.380 "write_zeroes": true, 00:04:13.380 "zcopy": true, 00:04:13.380 "get_zone_info": false, 00:04:13.380 "zone_management": false, 00:04:13.380 "zone_append": false, 00:04:13.380 "compare": false, 00:04:13.380 "compare_and_write": false, 00:04:13.380 "abort": true, 00:04:13.380 "seek_hole": false, 00:04:13.380 "seek_data": false, 00:04:13.380 "copy": true, 00:04:13.380 "nvme_iov_md": false 00:04:13.380 }, 00:04:13.380 "memory_domains": [ 00:04:13.380 { 00:04:13.380 "dma_device_id": "system", 00:04:13.380 "dma_device_type": 1 00:04:13.380 }, 00:04:13.380 { 00:04:13.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.380 "dma_device_type": 2 00:04:13.380 } 00:04:13.380 ], 00:04:13.380 "driver_specific": {} 00:04:13.380 } 00:04:13.380 ]' 00:04:13.380 09:37:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:13.380 09:37:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:13.380 09:37:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:13.380 09:37:30 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.380 09:37:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.380 09:37:30 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.380 09:37:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:13.380 09:37:30 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.380 09:37:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.380 09:37:30 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.380 09:37:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:13.380 09:37:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:13.380 09:37:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:13.380 00:04:13.380 real 0m0.111s 00:04:13.380 user 0m0.073s 00:04:13.380 sys 0m0.011s 00:04:13.380 09:37:30 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.380 09:37:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.380 
************************************ 00:04:13.380 END TEST rpc_plugins 00:04:13.380 ************************************ 00:04:13.638 09:37:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:13.638 09:37:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:13.638 09:37:30 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.638 09:37:30 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.638 09:37:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.638 ************************************ 00:04:13.638 START TEST rpc_trace_cmd_test 00:04:13.638 ************************************ 00:04:13.638 09:37:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:13.638 09:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:13.638 09:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:13.638 09:37:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.638 09:37:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:13.638 09:37:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.638 09:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:13.638 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1765360", 00:04:13.638 "tpoint_group_mask": "0x8", 00:04:13.638 "iscsi_conn": { 00:04:13.638 "mask": "0x2", 00:04:13.638 "tpoint_mask": "0x0" 00:04:13.638 }, 00:04:13.638 "scsi": { 00:04:13.638 "mask": "0x4", 00:04:13.638 "tpoint_mask": "0x0" 00:04:13.638 }, 00:04:13.638 "bdev": { 00:04:13.638 "mask": "0x8", 00:04:13.638 "tpoint_mask": "0xffffffffffffffff" 00:04:13.638 }, 00:04:13.638 "nvmf_rdma": { 00:04:13.638 "mask": "0x10", 00:04:13.638 "tpoint_mask": "0x0" 00:04:13.638 }, 00:04:13.638 "nvmf_tcp": { 00:04:13.638 "mask": "0x20", 00:04:13.638 "tpoint_mask": "0x0" 00:04:13.638 }, 00:04:13.638 "ftl": { 00:04:13.638 "mask": "0x40", 00:04:13.638 "tpoint_mask": "0x0" 00:04:13.638 }, 00:04:13.638 "blobfs": { 00:04:13.638 "mask": "0x80", 00:04:13.638 "tpoint_mask": "0x0" 00:04:13.638 }, 00:04:13.638 "dsa": { 00:04:13.638 "mask": "0x200", 00:04:13.638 "tpoint_mask": "0x0" 00:04:13.638 }, 00:04:13.638 "thread": { 00:04:13.638 "mask": "0x400", 00:04:13.638 "tpoint_mask": "0x0" 00:04:13.638 }, 00:04:13.638 "nvme_pcie": { 00:04:13.638 "mask": "0x800", 00:04:13.638 "tpoint_mask": "0x0" 00:04:13.638 }, 00:04:13.638 "iaa": { 00:04:13.638 "mask": "0x1000", 00:04:13.638 "tpoint_mask": "0x0" 00:04:13.638 }, 00:04:13.638 "nvme_tcp": { 00:04:13.638 "mask": "0x2000", 00:04:13.638 "tpoint_mask": "0x0" 00:04:13.638 }, 00:04:13.638 "bdev_nvme": { 00:04:13.638 "mask": "0x4000", 00:04:13.638 "tpoint_mask": "0x0" 00:04:13.638 }, 00:04:13.638 "sock": { 00:04:13.638 "mask": "0x8000", 00:04:13.638 "tpoint_mask": "0x0" 00:04:13.638 } 00:04:13.638 }' 00:04:13.638 09:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:13.638 09:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:13.638 09:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:13.638 09:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:13.638 09:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:13.638 09:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:13.638 09:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:13.638 09:37:30 
rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:13.638 09:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:13.638 09:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:13.638 00:04:13.638 real 0m0.200s 00:04:13.638 user 0m0.176s 00:04:13.638 sys 0m0.016s 00:04:13.638 09:37:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.638 09:37:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:13.638 ************************************ 00:04:13.638 END TEST rpc_trace_cmd_test 00:04:13.638 ************************************ 00:04:13.638 09:37:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:13.638 09:37:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:13.638 09:37:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:13.638 09:37:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:13.638 09:37:30 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.638 09:37:30 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.896 09:37:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.896 ************************************ 00:04:13.896 START TEST rpc_daemon_integrity 00:04:13.896 ************************************ 00:04:13.896 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:13.896 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:13.896 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.896 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.896 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.896 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:13.896 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:13.896 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:13.896 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:13.896 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.896 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.896 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.896 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:13.896 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:13.896 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.896 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.896 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.896 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:13.896 { 00:04:13.896 "name": "Malloc2", 00:04:13.896 "aliases": [ 00:04:13.896 "15879f9c-b5db-4f27-ac24-efca419c6db5" 00:04:13.896 ], 00:04:13.896 "product_name": "Malloc disk", 00:04:13.896 "block_size": 512, 00:04:13.896 "num_blocks": 16384, 00:04:13.896 "uuid": "15879f9c-b5db-4f27-ac24-efca419c6db5", 00:04:13.896 "assigned_rate_limits": { 00:04:13.896 "rw_ios_per_sec": 0, 00:04:13.896 "rw_mbytes_per_sec": 0, 00:04:13.896 "r_mbytes_per_sec": 0, 00:04:13.896 "w_mbytes_per_sec": 0 00:04:13.896 }, 00:04:13.896 "claimed": false, 
00:04:13.896 "zoned": false, 00:04:13.896 "supported_io_types": { 00:04:13.896 "read": true, 00:04:13.896 "write": true, 00:04:13.896 "unmap": true, 00:04:13.896 "flush": true, 00:04:13.896 "reset": true, 00:04:13.896 "nvme_admin": false, 00:04:13.896 "nvme_io": false, 00:04:13.896 "nvme_io_md": false, 00:04:13.896 "write_zeroes": true, 00:04:13.896 "zcopy": true, 00:04:13.896 "get_zone_info": false, 00:04:13.896 "zone_management": false, 00:04:13.896 "zone_append": false, 00:04:13.896 "compare": false, 00:04:13.896 "compare_and_write": false, 00:04:13.896 "abort": true, 00:04:13.896 "seek_hole": false, 00:04:13.896 "seek_data": false, 00:04:13.897 "copy": true, 00:04:13.897 "nvme_iov_md": false 00:04:13.897 }, 00:04:13.897 "memory_domains": [ 00:04:13.897 { 00:04:13.897 "dma_device_id": "system", 00:04:13.897 "dma_device_type": 1 00:04:13.897 }, 00:04:13.897 { 00:04:13.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.897 "dma_device_type": 2 00:04:13.897 } 00:04:13.897 ], 00:04:13.897 "driver_specific": {} 00:04:13.897 } 00:04:13.897 ]' 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.897 [2024-07-15 09:37:30.554670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:13.897 [2024-07-15 09:37:30.554712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:13.897 [2024-07-15 09:37:30.554734] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d72490 00:04:13.897 [2024-07-15 09:37:30.554756] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:13.897 [2024-07-15 09:37:30.556042] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:13.897 [2024-07-15 09:37:30.556067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:13.897 Passthru0 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:13.897 { 00:04:13.897 "name": "Malloc2", 00:04:13.897 "aliases": [ 00:04:13.897 "15879f9c-b5db-4f27-ac24-efca419c6db5" 00:04:13.897 ], 00:04:13.897 "product_name": "Malloc disk", 00:04:13.897 "block_size": 512, 00:04:13.897 "num_blocks": 16384, 00:04:13.897 "uuid": "15879f9c-b5db-4f27-ac24-efca419c6db5", 00:04:13.897 "assigned_rate_limits": { 00:04:13.897 "rw_ios_per_sec": 0, 00:04:13.897 "rw_mbytes_per_sec": 0, 00:04:13.897 "r_mbytes_per_sec": 0, 00:04:13.897 "w_mbytes_per_sec": 0 00:04:13.897 }, 00:04:13.897 "claimed": true, 00:04:13.897 "claim_type": "exclusive_write", 00:04:13.897 "zoned": false, 00:04:13.897 "supported_io_types": { 00:04:13.897 "read": true, 00:04:13.897 "write": true, 
00:04:13.897 "unmap": true, 00:04:13.897 "flush": true, 00:04:13.897 "reset": true, 00:04:13.897 "nvme_admin": false, 00:04:13.897 "nvme_io": false, 00:04:13.897 "nvme_io_md": false, 00:04:13.897 "write_zeroes": true, 00:04:13.897 "zcopy": true, 00:04:13.897 "get_zone_info": false, 00:04:13.897 "zone_management": false, 00:04:13.897 "zone_append": false, 00:04:13.897 "compare": false, 00:04:13.897 "compare_and_write": false, 00:04:13.897 "abort": true, 00:04:13.897 "seek_hole": false, 00:04:13.897 "seek_data": false, 00:04:13.897 "copy": true, 00:04:13.897 "nvme_iov_md": false 00:04:13.897 }, 00:04:13.897 "memory_domains": [ 00:04:13.897 { 00:04:13.897 "dma_device_id": "system", 00:04:13.897 "dma_device_type": 1 00:04:13.897 }, 00:04:13.897 { 00:04:13.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.897 "dma_device_type": 2 00:04:13.897 } 00:04:13.897 ], 00:04:13.897 "driver_specific": {} 00:04:13.897 }, 00:04:13.897 { 00:04:13.897 "name": "Passthru0", 00:04:13.897 "aliases": [ 00:04:13.897 "043852ac-04fb-5ab5-b541-f28d815b1b29" 00:04:13.897 ], 00:04:13.897 "product_name": "passthru", 00:04:13.897 "block_size": 512, 00:04:13.897 "num_blocks": 16384, 00:04:13.897 "uuid": "043852ac-04fb-5ab5-b541-f28d815b1b29", 00:04:13.897 "assigned_rate_limits": { 00:04:13.897 "rw_ios_per_sec": 0, 00:04:13.897 "rw_mbytes_per_sec": 0, 00:04:13.897 "r_mbytes_per_sec": 0, 00:04:13.897 "w_mbytes_per_sec": 0 00:04:13.897 }, 00:04:13.897 "claimed": false, 00:04:13.897 "zoned": false, 00:04:13.897 "supported_io_types": { 00:04:13.897 "read": true, 00:04:13.897 "write": true, 00:04:13.897 "unmap": true, 00:04:13.897 "flush": true, 00:04:13.897 "reset": true, 00:04:13.897 "nvme_admin": false, 00:04:13.897 "nvme_io": false, 00:04:13.897 "nvme_io_md": false, 00:04:13.897 "write_zeroes": true, 00:04:13.897 "zcopy": true, 00:04:13.897 "get_zone_info": false, 00:04:13.897 "zone_management": false, 00:04:13.897 "zone_append": false, 00:04:13.897 "compare": false, 00:04:13.897 "compare_and_write": false, 00:04:13.897 "abort": true, 00:04:13.897 "seek_hole": false, 00:04:13.897 "seek_data": false, 00:04:13.897 "copy": true, 00:04:13.897 "nvme_iov_md": false 00:04:13.897 }, 00:04:13.897 "memory_domains": [ 00:04:13.897 { 00:04:13.897 "dma_device_id": "system", 00:04:13.897 "dma_device_type": 1 00:04:13.897 }, 00:04:13.897 { 00:04:13.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.897 "dma_device_type": 2 00:04:13.897 } 00:04:13.897 ], 00:04:13.897 "driver_specific": { 00:04:13.897 "passthru": { 00:04:13.897 "name": "Passthru0", 00:04:13.897 "base_bdev_name": "Malloc2" 00:04:13.897 } 00:04:13.897 } 00:04:13.897 } 00:04:13.897 ]' 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.897 09:37:30 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:13.897 09:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:13.897 00:04:13.897 real 0m0.235s 00:04:13.897 user 0m0.157s 00:04:13.897 sys 0m0.023s 00:04:13.898 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.898 09:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.898 ************************************ 00:04:13.898 END TEST rpc_daemon_integrity 00:04:13.898 ************************************ 00:04:14.155 09:37:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:14.155 09:37:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:14.155 09:37:30 rpc -- rpc/rpc.sh@84 -- # killprocess 1765360 00:04:14.155 09:37:30 rpc -- common/autotest_common.sh@948 -- # '[' -z 1765360 ']' 00:04:14.155 09:37:30 rpc -- common/autotest_common.sh@952 -- # kill -0 1765360 00:04:14.155 09:37:30 rpc -- common/autotest_common.sh@953 -- # uname 00:04:14.155 09:37:30 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:14.155 09:37:30 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1765360 00:04:14.155 09:37:30 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:14.155 09:37:30 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:14.155 09:37:30 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1765360' 00:04:14.155 killing process with pid 1765360 00:04:14.155 09:37:30 rpc -- common/autotest_common.sh@967 -- # kill 1765360 00:04:14.155 09:37:30 rpc -- common/autotest_common.sh@972 -- # wait 1765360 00:04:14.413 00:04:14.413 real 0m1.875s 00:04:14.413 user 0m2.373s 00:04:14.413 sys 0m0.592s 00:04:14.413 09:37:31 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.413 09:37:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.413 ************************************ 00:04:14.413 END TEST rpc 00:04:14.413 ************************************ 00:04:14.413 09:37:31 -- common/autotest_common.sh@1142 -- # return 0 00:04:14.413 09:37:31 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:14.413 09:37:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.413 09:37:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.413 09:37:31 -- common/autotest_common.sh@10 -- # set +x 00:04:14.413 ************************************ 00:04:14.413 START TEST skip_rpc 00:04:14.413 ************************************ 00:04:14.413 09:37:31 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:14.671 * Looking for test storage... 
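Condensed, the rpc_daemon_integrity sequence that just completed is the following claim/release cycle; all RPC names and arguments are as traced, paths are assumed:

    rpc=./scripts/rpc.py
    $rpc bdev_malloc_create 8 512                     # 8 MiB, 512-byte blocks (auto-named; Malloc2 in the trace)
    $rpc bdev_passthru_create -b Malloc2 -p Passthru0 # claims Malloc2 (claim_type exclusive_write)
    $rpc bdev_get_bdevs | jq length                   # 2: the base bdev plus the passthru on top
    $rpc bdev_passthru_delete Passthru0               # releases the claim on Malloc2
    $rpc bdev_malloc_delete Malloc2
    $rpc bdev_get_bdevs | jq length                   # 0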
00:04:14.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:14.671 09:37:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:14.671 09:37:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:14.671 09:37:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:14.671 09:37:31 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.671 09:37:31 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.671 09:37:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.671 ************************************ 00:04:14.671 START TEST skip_rpc 00:04:14.671 ************************************ 00:04:14.671 09:37:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:14.671 09:37:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1765797 00:04:14.671 09:37:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:14.671 09:37:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.671 09:37:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:14.671 [2024-07-15 09:37:31.286577] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:04:14.671 [2024-07-15 09:37:31.286640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1765797 ] 00:04:14.671 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.671 [2024-07-15 09:37:31.316068] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:04:14.671 [2024-07-15 09:37:31.343251] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.671 [2024-07-15 09:37:31.430417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1765797 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1765797 ']' 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1765797 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1765797 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1765797' 00:04:19.944 killing process with pid 1765797 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1765797 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1765797 00:04:19.944 00:04:19.944 real 0m5.435s 00:04:19.944 user 0m5.115s 00:04:19.944 sys 0m0.327s 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.944 09:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.944 ************************************ 00:04:19.944 END TEST skip_rpc 00:04:19.944 ************************************ 00:04:19.944 09:37:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:19.944 09:37:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:19.944 09:37:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.944 
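What TEST skip_rpc verified above, as a standalone sketch (binary and script paths assumed): with --no-rpc-server the target must come up but never answer on the default socket, so the version query is required to fail.

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5                                           # give the app time to initialize
    if ./scripts/rpc.py spdk_get_version; then        # nothing listens on /var/tmp/spdk.sock
        echo "unexpected: RPC server answered" >&2; exit 1
    fi
    kill "$pid" && wait "$pid"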
09:37:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.944 09:37:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.944 ************************************ 00:04:19.944 START TEST skip_rpc_with_json 00:04:19.944 ************************************ 00:04:19.944 09:37:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:19.944 09:37:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:19.944 09:37:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1766490 00:04:19.944 09:37:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:19.944 09:37:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.944 09:37:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1766490 00:04:19.944 09:37:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1766490 ']' 00:04:19.944 09:37:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.944 09:37:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:19.944 09:37:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.945 09:37:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:19.945 09:37:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.203 [2024-07-15 09:37:36.770173] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:04:20.203 [2024-07-15 09:37:36.770291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1766490 ] 00:04:20.203 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.203 [2024-07-15 09:37:36.802896] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
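The waitforlisten step above reduces to polling the target until its RPC socket answers; a rough equivalent of that helper (an assumed approximation, not its actual implementation) is:

    until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.5                                     # retry until /var/tmp/spdk.sock is served
    done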
00:04:20.203 [2024-07-15 09:37:36.833202] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.203 [2024-07-15 09:37:36.928207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.462 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:20.462 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:20.462 09:37:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:20.462 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.462 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.462 [2024-07-15 09:37:37.187614] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:20.462 request: 00:04:20.462 { 00:04:20.462 "trtype": "tcp", 00:04:20.462 "method": "nvmf_get_transports", 00:04:20.462 "req_id": 1 00:04:20.462 } 00:04:20.462 Got JSON-RPC error response 00:04:20.462 response: 00:04:20.462 { 00:04:20.462 "code": -19, 00:04:20.462 "message": "No such device" 00:04:20.462 } 00:04:20.462 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:20.462 09:37:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:20.462 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.462 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.462 [2024-07-15 09:37:37.195740] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:20.462 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.462 09:37:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:20.462 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.462 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.720 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.720 09:37:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:20.720 { 00:04:20.720 "subsystems": [ 00:04:20.720 { 00:04:20.720 "subsystem": "vfio_user_target", 00:04:20.720 "config": null 00:04:20.720 }, 00:04:20.720 { 00:04:20.720 "subsystem": "keyring", 00:04:20.720 "config": [] 00:04:20.720 }, 00:04:20.720 { 00:04:20.720 "subsystem": "iobuf", 00:04:20.720 "config": [ 00:04:20.720 { 00:04:20.720 "method": "iobuf_set_options", 00:04:20.720 "params": { 00:04:20.720 "small_pool_count": 8192, 00:04:20.720 "large_pool_count": 1024, 00:04:20.720 "small_bufsize": 8192, 00:04:20.720 "large_bufsize": 135168 00:04:20.720 } 00:04:20.720 } 00:04:20.720 ] 00:04:20.720 }, 00:04:20.720 { 00:04:20.720 "subsystem": "sock", 00:04:20.720 "config": [ 00:04:20.720 { 00:04:20.720 "method": "sock_set_default_impl", 00:04:20.720 "params": { 00:04:20.720 "impl_name": "posix" 00:04:20.720 } 00:04:20.720 }, 00:04:20.720 { 00:04:20.720 "method": "sock_impl_set_options", 00:04:20.720 "params": { 00:04:20.720 "impl_name": "ssl", 00:04:20.720 "recv_buf_size": 4096, 00:04:20.720 "send_buf_size": 4096, 00:04:20.720 "enable_recv_pipe": true, 00:04:20.720 "enable_quickack": false, 00:04:20.720 "enable_placement_id": 0, 00:04:20.720 "enable_zerocopy_send_server": true, 00:04:20.720 
"enable_zerocopy_send_client": false, 00:04:20.720 "zerocopy_threshold": 0, 00:04:20.720 "tls_version": 0, 00:04:20.720 "enable_ktls": false 00:04:20.720 } 00:04:20.720 }, 00:04:20.720 { 00:04:20.720 "method": "sock_impl_set_options", 00:04:20.720 "params": { 00:04:20.720 "impl_name": "posix", 00:04:20.720 "recv_buf_size": 2097152, 00:04:20.720 "send_buf_size": 2097152, 00:04:20.720 "enable_recv_pipe": true, 00:04:20.720 "enable_quickack": false, 00:04:20.720 "enable_placement_id": 0, 00:04:20.720 "enable_zerocopy_send_server": true, 00:04:20.720 "enable_zerocopy_send_client": false, 00:04:20.720 "zerocopy_threshold": 0, 00:04:20.720 "tls_version": 0, 00:04:20.720 "enable_ktls": false 00:04:20.720 } 00:04:20.720 } 00:04:20.720 ] 00:04:20.720 }, 00:04:20.721 { 00:04:20.721 "subsystem": "vmd", 00:04:20.721 "config": [] 00:04:20.721 }, 00:04:20.721 { 00:04:20.721 "subsystem": "accel", 00:04:20.721 "config": [ 00:04:20.721 { 00:04:20.721 "method": "accel_set_options", 00:04:20.721 "params": { 00:04:20.721 "small_cache_size": 128, 00:04:20.721 "large_cache_size": 16, 00:04:20.721 "task_count": 2048, 00:04:20.721 "sequence_count": 2048, 00:04:20.721 "buf_count": 2048 00:04:20.721 } 00:04:20.721 } 00:04:20.721 ] 00:04:20.721 }, 00:04:20.721 { 00:04:20.721 "subsystem": "bdev", 00:04:20.721 "config": [ 00:04:20.721 { 00:04:20.721 "method": "bdev_set_options", 00:04:20.721 "params": { 00:04:20.721 "bdev_io_pool_size": 65535, 00:04:20.721 "bdev_io_cache_size": 256, 00:04:20.721 "bdev_auto_examine": true, 00:04:20.721 "iobuf_small_cache_size": 128, 00:04:20.721 "iobuf_large_cache_size": 16 00:04:20.721 } 00:04:20.721 }, 00:04:20.721 { 00:04:20.721 "method": "bdev_raid_set_options", 00:04:20.721 "params": { 00:04:20.721 "process_window_size_kb": 1024 00:04:20.721 } 00:04:20.721 }, 00:04:20.721 { 00:04:20.721 "method": "bdev_iscsi_set_options", 00:04:20.721 "params": { 00:04:20.721 "timeout_sec": 30 00:04:20.721 } 00:04:20.721 }, 00:04:20.721 { 00:04:20.721 "method": "bdev_nvme_set_options", 00:04:20.721 "params": { 00:04:20.721 "action_on_timeout": "none", 00:04:20.721 "timeout_us": 0, 00:04:20.721 "timeout_admin_us": 0, 00:04:20.721 "keep_alive_timeout_ms": 10000, 00:04:20.721 "arbitration_burst": 0, 00:04:20.721 "low_priority_weight": 0, 00:04:20.721 "medium_priority_weight": 0, 00:04:20.721 "high_priority_weight": 0, 00:04:20.721 "nvme_adminq_poll_period_us": 10000, 00:04:20.721 "nvme_ioq_poll_period_us": 0, 00:04:20.721 "io_queue_requests": 0, 00:04:20.721 "delay_cmd_submit": true, 00:04:20.721 "transport_retry_count": 4, 00:04:20.721 "bdev_retry_count": 3, 00:04:20.721 "transport_ack_timeout": 0, 00:04:20.721 "ctrlr_loss_timeout_sec": 0, 00:04:20.721 "reconnect_delay_sec": 0, 00:04:20.721 "fast_io_fail_timeout_sec": 0, 00:04:20.721 "disable_auto_failback": false, 00:04:20.721 "generate_uuids": false, 00:04:20.721 "transport_tos": 0, 00:04:20.721 "nvme_error_stat": false, 00:04:20.721 "rdma_srq_size": 0, 00:04:20.721 "io_path_stat": false, 00:04:20.721 "allow_accel_sequence": false, 00:04:20.721 "rdma_max_cq_size": 0, 00:04:20.721 "rdma_cm_event_timeout_ms": 0, 00:04:20.721 "dhchap_digests": [ 00:04:20.721 "sha256", 00:04:20.721 "sha384", 00:04:20.721 "sha512" 00:04:20.721 ], 00:04:20.721 "dhchap_dhgroups": [ 00:04:20.721 "null", 00:04:20.721 "ffdhe2048", 00:04:20.721 "ffdhe3072", 00:04:20.721 "ffdhe4096", 00:04:20.721 "ffdhe6144", 00:04:20.721 "ffdhe8192" 00:04:20.721 ] 00:04:20.721 } 00:04:20.721 }, 00:04:20.721 { 00:04:20.721 "method": "bdev_nvme_set_hotplug", 00:04:20.721 "params": { 
00:04:20.721 "period_us": 100000, 00:04:20.721 "enable": false 00:04:20.721 } 00:04:20.721 }, 00:04:20.721 { 00:04:20.721 "method": "bdev_wait_for_examine" 00:04:20.721 } 00:04:20.721 ] 00:04:20.721 }, 00:04:20.721 { 00:04:20.721 "subsystem": "scsi", 00:04:20.721 "config": null 00:04:20.721 }, 00:04:20.721 { 00:04:20.721 "subsystem": "scheduler", 00:04:20.721 "config": [ 00:04:20.721 { 00:04:20.721 "method": "framework_set_scheduler", 00:04:20.721 "params": { 00:04:20.721 "name": "static" 00:04:20.721 } 00:04:20.721 } 00:04:20.721 ] 00:04:20.721 }, 00:04:20.721 { 00:04:20.721 "subsystem": "vhost_scsi", 00:04:20.721 "config": [] 00:04:20.721 }, 00:04:20.721 { 00:04:20.721 "subsystem": "vhost_blk", 00:04:20.721 "config": [] 00:04:20.721 }, 00:04:20.721 { 00:04:20.721 "subsystem": "ublk", 00:04:20.721 "config": [] 00:04:20.721 }, 00:04:20.721 { 00:04:20.721 "subsystem": "nbd", 00:04:20.721 "config": [] 00:04:20.721 }, 00:04:20.721 { 00:04:20.721 "subsystem": "nvmf", 00:04:20.721 "config": [ 00:04:20.721 { 00:04:20.721 "method": "nvmf_set_config", 00:04:20.721 "params": { 00:04:20.721 "discovery_filter": "match_any", 00:04:20.721 "admin_cmd_passthru": { 00:04:20.721 "identify_ctrlr": false 00:04:20.721 } 00:04:20.721 } 00:04:20.721 }, 00:04:20.721 { 00:04:20.721 "method": "nvmf_set_max_subsystems", 00:04:20.721 "params": { 00:04:20.721 "max_subsystems": 1024 00:04:20.721 } 00:04:20.721 }, 00:04:20.721 { 00:04:20.721 "method": "nvmf_set_crdt", 00:04:20.721 "params": { 00:04:20.721 "crdt1": 0, 00:04:20.721 "crdt2": 0, 00:04:20.721 "crdt3": 0 00:04:20.721 } 00:04:20.721 }, 00:04:20.721 { 00:04:20.721 "method": "nvmf_create_transport", 00:04:20.721 "params": { 00:04:20.721 "trtype": "TCP", 00:04:20.721 "max_queue_depth": 128, 00:04:20.721 "max_io_qpairs_per_ctrlr": 127, 00:04:20.721 "in_capsule_data_size": 4096, 00:04:20.721 "max_io_size": 131072, 00:04:20.721 "io_unit_size": 131072, 00:04:20.721 "max_aq_depth": 128, 00:04:20.721 "num_shared_buffers": 511, 00:04:20.721 "buf_cache_size": 4294967295, 00:04:20.721 "dif_insert_or_strip": false, 00:04:20.721 "zcopy": false, 00:04:20.721 "c2h_success": true, 00:04:20.721 "sock_priority": 0, 00:04:20.721 "abort_timeout_sec": 1, 00:04:20.721 "ack_timeout": 0, 00:04:20.721 "data_wr_pool_size": 0 00:04:20.721 } 00:04:20.721 } 00:04:20.721 ] 00:04:20.721 }, 00:04:20.721 { 00:04:20.721 "subsystem": "iscsi", 00:04:20.721 "config": [ 00:04:20.721 { 00:04:20.721 "method": "iscsi_set_options", 00:04:20.721 "params": { 00:04:20.721 "node_base": "iqn.2016-06.io.spdk", 00:04:20.721 "max_sessions": 128, 00:04:20.721 "max_connections_per_session": 2, 00:04:20.721 "max_queue_depth": 64, 00:04:20.721 "default_time2wait": 2, 00:04:20.721 "default_time2retain": 20, 00:04:20.721 "first_burst_length": 8192, 00:04:20.721 "immediate_data": true, 00:04:20.721 "allow_duplicated_isid": false, 00:04:20.721 "error_recovery_level": 0, 00:04:20.721 "nop_timeout": 60, 00:04:20.721 "nop_in_interval": 30, 00:04:20.721 "disable_chap": false, 00:04:20.721 "require_chap": false, 00:04:20.721 "mutual_chap": false, 00:04:20.721 "chap_group": 0, 00:04:20.721 "max_large_datain_per_connection": 64, 00:04:20.721 "max_r2t_per_connection": 4, 00:04:20.721 "pdu_pool_size": 36864, 00:04:20.721 "immediate_data_pool_size": 16384, 00:04:20.721 "data_out_pool_size": 2048 00:04:20.721 } 00:04:20.721 } 00:04:20.721 ] 00:04:20.721 } 00:04:20.721 ] 00:04:20.721 } 00:04:20.721 09:37:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:20.721 09:37:37 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1766490 00:04:20.721 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1766490 ']' 00:04:20.721 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1766490 00:04:20.721 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:20.721 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:20.721 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1766490 00:04:20.721 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:20.721 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:20.721 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1766490' 00:04:20.721 killing process with pid 1766490 00:04:20.721 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1766490 00:04:20.721 09:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1766490 00:04:21.288 09:37:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1766626 00:04:21.288 09:37:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:21.288 09:37:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:26.573 09:37:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1766626 00:04:26.573 09:37:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1766626 ']' 00:04:26.573 09:37:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1766626 00:04:26.573 09:37:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:26.573 09:37:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:26.573 09:37:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1766626 00:04:26.573 09:37:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:26.573 09:37:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:26.573 09:37:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1766626' 00:04:26.573 killing process with pid 1766626 00:04:26.573 09:37:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1766626 00:04:26.574 09:37:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1766626 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:26.574 00:04:26.574 real 0m6.495s 00:04:26.574 user 0m6.094s 00:04:26.574 sys 0m0.713s 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.574 ************************************ 00:04:26.574 END 
TEST skip_rpc_with_json 00:04:26.574 ************************************ 00:04:26.574 09:37:43 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:26.574 09:37:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:26.574 09:37:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.574 09:37:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.574 09:37:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.574 ************************************ 00:04:26.574 START TEST skip_rpc_with_delay 00:04:26.574 ************************************ 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.574 [2024-07-15 09:37:43.316130] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
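The skip_rpc_with_json run that just finished boils down to this save/replay round trip; the RPC names, flags, and grep marker are verbatim from the trace, paths are assumed:

    ./scripts/rpc.py nvmf_create_transport -t tcp            # create real state worth capturing
    ./scripts/rpc.py save_config > config.json               # dump it as a JSON-RPC config file
    # stop the first target, then replay the file with no RPC server at all:
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' log.txt                     # the TCP transport was recreated from the file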
00:04:26.574 [2024-07-15 09:37:43.316230] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:26.574 00:04:26.574 real 0m0.068s 00:04:26.574 user 0m0.042s 00:04:26.574 sys 0m0.025s 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.574 09:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:26.574 ************************************ 00:04:26.574 END TEST skip_rpc_with_delay 00:04:26.574 ************************************ 00:04:26.574 09:37:43 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:26.574 09:37:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:26.574 09:37:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:26.574 09:37:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:26.574 09:37:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.574 09:37:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.574 09:37:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.832 ************************************ 00:04:26.832 START TEST exit_on_failed_rpc_init 00:04:26.832 ************************************ 00:04:26.832 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:26.832 09:37:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1767339 00:04:26.832 09:37:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:26.832 09:37:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1767339 00:04:26.832 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1767339 ']' 00:04:26.832 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.832 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:26.832 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.832 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:26.832 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:26.832 [2024-07-15 09:37:43.434165] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
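The skip_rpc_with_delay case closed out above is essentially one negative check (path assumed): the flag pair has to be rejected, because --wait-for-rpc needs an RPC server to wait on.

    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: conflicting flags were accepted" >&2; exit 1
    fi                                                # expected: startup fails with the error traced above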
00:04:26.832 [2024-07-15 09:37:43.434244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1767339 ] 00:04:26.832 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.832 [2024-07-15 09:37:43.465817] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:26.832 [2024-07-15 09:37:43.497604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.832 [2024-07-15 09:37:43.587427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.089 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:27.089 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:27.089 09:37:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.089 09:37:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.089 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:27.089 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.089 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.089 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:27.089 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.089 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:27.089 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.089 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:27.089 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.089 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:27.089 09:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.347 [2024-07-15 09:37:43.899187] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:04:27.347 [2024-07-15 09:37:43.899260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1767356 ] 00:04:27.347 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.347 [2024-07-15 09:37:43.930245] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:04:27.347 [2024-07-15 09:37:43.963350] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.347 [2024-07-15 09:37:44.057513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.347 [2024-07-15 09:37:44.057633] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:27.347 [2024-07-15 09:37:44.057666] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:27.347 [2024-07-15 09:37:44.057678] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:27.605 09:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:27.605 09:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:27.605 09:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:27.605 09:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:27.606 09:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:27.606 09:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:27.606 09:37:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:27.606 09:37:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1767339 00:04:27.606 09:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1767339 ']' 00:04:27.606 09:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1767339 00:04:27.606 09:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:27.606 09:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:27.606 09:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1767339 00:04:27.606 09:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:27.606 09:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:27.606 09:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1767339' 00:04:27.606 killing process with pid 1767339 00:04:27.606 09:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1767339 00:04:27.606 09:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1767339 00:04:27.863 00:04:27.863 real 0m1.199s 00:04:27.864 user 0m1.299s 00:04:27.864 sys 0m0.466s 00:04:27.864 09:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.864 09:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:27.864 ************************************ 00:04:27.864 END TEST exit_on_failed_rpc_init 00:04:27.864 ************************************ 00:04:27.864 09:37:44 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:27.864 09:37:44 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:27.864 00:04:27.864 real 0m13.439s 00:04:27.864 user 0m12.646s 00:04:27.864 sys 0m1.695s 00:04:27.864 09:37:44 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.864 09:37:44 skip_rpc 
-- common/autotest_common.sh@10 -- # set +x 00:04:27.864 ************************************ 00:04:27.864 END TEST skip_rpc 00:04:27.864 ************************************ 00:04:27.864 09:37:44 -- common/autotest_common.sh@1142 -- # return 0 00:04:27.864 09:37:44 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:27.864 09:37:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.864 09:37:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.864 09:37:44 -- common/autotest_common.sh@10 -- # set +x 00:04:28.123 ************************************ 00:04:28.123 START TEST rpc_client 00:04:28.123 ************************************ 00:04:28.123 09:37:44 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:28.123 * Looking for test storage... 00:04:28.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:28.123 09:37:44 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:28.123 OK 00:04:28.123 09:37:44 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:28.123 00:04:28.123 real 0m0.068s 00:04:28.123 user 0m0.029s 00:04:28.123 sys 0m0.043s 00:04:28.123 09:37:44 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.123 09:37:44 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:28.123 ************************************ 00:04:28.123 END TEST rpc_client 00:04:28.123 ************************************ 00:04:28.123 09:37:44 -- common/autotest_common.sh@1142 -- # return 0 00:04:28.123 09:37:44 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:28.123 09:37:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.123 09:37:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.123 09:37:44 -- common/autotest_common.sh@10 -- # set +x 00:04:28.123 ************************************ 00:04:28.123 START TEST json_config 00:04:28.123 ************************************ 00:04:28.123 09:37:44 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:28.123 09:37:44 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:28.123 09:37:44 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:28.123 09:37:44 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:28.123 09:37:44 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.123 09:37:44 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.123 09:37:44 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.123 09:37:44 json_config -- paths/export.sh@5 -- # export PATH 00:04:28.123 09:37:44 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@47 -- # : 0 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@35 -- 
# '[' 0 -eq 1 ']' 00:04:28.123 09:37:44 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:28.123 INFO: JSON configuration test init 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:28.123 09:37:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:28.123 09:37:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:28.123 09:37:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:28.123 09:37:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.123 09:37:44 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:28.123 09:37:44 json_config -- json_config/common.sh@9 -- # local app=target 00:04:28.123 09:37:44 json_config -- json_config/common.sh@10 -- # shift 00:04:28.123 09:37:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:28.123 09:37:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:28.123 09:37:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:28.123 09:37:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.123 09:37:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.123 09:37:44 json_config -- json_config/common.sh@22 -- # 
app_pid["$app"]=1767600 00:04:28.123 09:37:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:28.123 09:37:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:28.123 Waiting for target to run... 00:04:28.123 09:37:44 json_config -- json_config/common.sh@25 -- # waitforlisten 1767600 /var/tmp/spdk_tgt.sock 00:04:28.123 09:37:44 json_config -- common/autotest_common.sh@829 -- # '[' -z 1767600 ']' 00:04:28.123 09:37:44 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:28.123 09:37:44 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:28.123 09:37:44 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:28.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:28.123 09:37:44 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:28.123 09:37:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.123 [2024-07-15 09:37:44.877843] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:04:28.123 [2024-07-15 09:37:44.877941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1767600 ] 00:04:28.123 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.690 [2024-07-15 09:37:45.350758] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:04:28.690 [2024-07-15 09:37:45.385033] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.690 [2024-07-15 09:37:45.461788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.258 09:37:45 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:29.258 09:37:45 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:29.258 09:37:45 json_config -- json_config/common.sh@26 -- # echo '' 00:04:29.258 00:04:29.258 09:37:45 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:29.258 09:37:45 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:29.258 09:37:45 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:29.258 09:37:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.258 09:37:45 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:29.258 09:37:45 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:29.258 09:37:45 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:29.258 09:37:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.258 09:37:45 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:29.258 09:37:45 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:29.258 09:37:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:32.537 09:37:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:32.537 09:37:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:32.537 09:37:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:32.537 09:37:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:32.537 09:37:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 
]] 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:32.537 09:37:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:32.537 09:37:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:32.537 09:37:49 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:32.537 09:37:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:32.793 MallocForNvmf0 00:04:32.793 09:37:49 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:32.793 09:37:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:33.050 MallocForNvmf1 00:04:33.050 09:37:49 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:33.050 09:37:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:33.308 [2024-07-15 09:37:50.039286] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:33.308 09:37:50 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:33.308 09:37:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:33.565 09:37:50 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:33.565 09:37:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:33.821 09:37:50 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:33.821 09:37:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:34.078 09:37:50 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:34.078 09:37:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:34.335 [2024-07-15 
09:37:51.030491] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:34.335 09:37:51 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:34.335 09:37:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:34.335 09:37:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.335 09:37:51 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:34.335 09:37:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:34.335 09:37:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.335 09:37:51 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:34.335 09:37:51 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:34.335 09:37:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:34.592 MallocBdevForConfigChangeCheck 00:04:34.592 09:37:51 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:34.592 09:37:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:34.592 09:37:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.592 09:37:51 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:34.592 09:37:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.158 09:37:51 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:35.158 INFO: shutting down applications... 
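[Annotation] Before the shutdown phase that the trace turns to next, it is worth condensing the create_nvmf_subsystem_config phase that just completed: every step is a plain rpc.py call against the target socket, in a fixed order. The commands below are exactly those traced above, wrapped in a small helper:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock "$@"; }
    rpc bdev_malloc_create 8 512 --name MallocForNvmf0      # backing bdev for namespace 1
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1     # backing bdev for namespace 2
    rpc nvmf_create_transport -t tcp -u 8192 -c 0           # TCP transport, 8 KiB IO unit
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420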
00:04:35.158 09:37:51 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:35.158 09:37:51 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:35.158 09:37:51 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:35.158 09:37:51 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:37.056 Calling clear_iscsi_subsystem 00:04:37.056 Calling clear_nvmf_subsystem 00:04:37.056 Calling clear_nbd_subsystem 00:04:37.056 Calling clear_ublk_subsystem 00:04:37.056 Calling clear_vhost_blk_subsystem 00:04:37.056 Calling clear_vhost_scsi_subsystem 00:04:37.056 Calling clear_bdev_subsystem 00:04:37.056 09:37:53 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:37.056 09:37:53 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:37.056 09:37:53 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:37.056 09:37:53 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:37.056 09:37:53 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:37.056 09:37:53 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:37.056 09:37:53 json_config -- json_config/json_config.sh@345 -- # break 00:04:37.056 09:37:53 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:37.056 09:37:53 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:37.056 09:37:53 json_config -- json_config/common.sh@31 -- # local app=target 00:04:37.056 09:37:53 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:37.056 09:37:53 json_config -- json_config/common.sh@35 -- # [[ -n 1767600 ]] 00:04:37.056 09:37:53 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1767600 00:04:37.056 09:37:53 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:37.056 09:37:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.056 09:37:53 json_config -- json_config/common.sh@41 -- # kill -0 1767600 00:04:37.056 09:37:53 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:37.624 09:37:54 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:37.624 09:37:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.624 09:37:54 json_config -- json_config/common.sh@41 -- # kill -0 1767600 00:04:37.624 09:37:54 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:37.624 09:37:54 json_config -- json_config/common.sh@43 -- # break 00:04:37.624 09:37:54 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:37.624 09:37:54 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:37.624 SPDK target shutdown done 00:04:37.624 09:37:54 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:37.624 INFO: relaunching applications... 
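[Annotation] The shutdown just traced is a clear-then-signal loop: wipe the runtime configuration over RPC (the "Calling clear_*_subsystem" lines), send SIGINT, then poll the PID for up to 30 half-second intervals before printing 'SPDK target shutdown done'. Sketched, reusing app_pid from the start-up sketch:

    "$SPDK/test/json_config/clear_config.py" -s /var/tmp/spdk_tgt.sock clear_config
    kill -SIGINT "${app_pid[target]}"
    for ((i = 0; i < 30; i++)); do
        kill -0 "${app_pid[target]}" 2>/dev/null || break   # process gone: clean exit
        sleep 0.5
    done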
00:04:37.625 09:37:54 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:37.625 09:37:54 json_config -- json_config/common.sh@9 -- # local app=target 00:04:37.625 09:37:54 json_config -- json_config/common.sh@10 -- # shift 00:04:37.625 09:37:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:37.625 09:37:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:37.625 09:37:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:37.625 09:37:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.625 09:37:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.625 09:37:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1768788 00:04:37.625 09:37:54 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:37.625 09:37:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:37.625 Waiting for target to run... 00:04:37.625 09:37:54 json_config -- json_config/common.sh@25 -- # waitforlisten 1768788 /var/tmp/spdk_tgt.sock 00:04:37.625 09:37:54 json_config -- common/autotest_common.sh@829 -- # '[' -z 1768788 ']' 00:04:37.625 09:37:54 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:37.625 09:37:54 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:37.625 09:37:54 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:37.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:37.625 09:37:54 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:37.625 09:37:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.625 [2024-07-15 09:37:54.275421] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:04:37.625 [2024-07-15 09:37:54.275500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1768788 ] 00:04:37.625 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.191 [2024-07-15 09:37:54.768612] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
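[Annotation] Relaunching "from config" means handing spdk_tgt the JSON captured with save_config while the previous target was still alive, so the whole bdev/transport/subsystem state is rebuilt at boot with no further RPCs. A sketch of the two halves, assuming the rpc helper from the earlier sketch:

    # While the old target is up: snapshot its configuration.
    rpc save_config > "$SPDK/spdk_tgt_config.json"
    # After shutdown: boot a fresh target directly from that snapshot.
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$SPDK/spdk_tgt_config.json" &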
00:04:38.191 [2024-07-15 09:37:54.802943] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.191 [2024-07-15 09:37:54.883061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.472 [2024-07-15 09:37:57.912897] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:41.472 [2024-07-15 09:37:57.945372] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:42.061 09:37:58 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.061 09:37:58 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:42.061 09:37:58 json_config -- json_config/common.sh@26 -- # echo '' 00:04:42.061 00:04:42.061 09:37:58 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:42.061 09:37:58 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:42.061 INFO: Checking if target configuration is the same... 00:04:42.061 09:37:58 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.061 09:37:58 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:42.061 09:37:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:42.061 + '[' 2 -ne 2 ']' 00:04:42.061 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:42.061 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:42.061 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:42.061 +++ basename /dev/fd/62 00:04:42.061 ++ mktemp /tmp/62.XXX 00:04:42.061 + tmp_file_1=/tmp/62.TYw 00:04:42.061 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.061 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:42.061 + tmp_file_2=/tmp/spdk_tgt_config.json.D2u 00:04:42.061 + ret=0 00:04:42.061 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:42.319 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:42.577 + diff -u /tmp/62.TYw /tmp/spdk_tgt_config.json.D2u 00:04:42.577 + echo 'INFO: JSON config files are the same' 00:04:42.577 INFO: JSON config files are the same 00:04:42.577 + rm /tmp/62.TYw /tmp/spdk_tgt_config.json.D2u 00:04:42.577 + exit 0 00:04:42.577 09:37:59 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:42.577 09:37:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:42.577 INFO: changing configuration and checking if this can be detected... 
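[Annotation] The equality check just traced avoids false diffs from JSON field ordering: json_diff.sh runs both the live configuration and the saved file through config_filter.py -method sort before comparing. Reduced to its core (the temp-file names are illustrative; the script really uses mktemp and /dev/fd redirection as the trace shows):

    sort_cfg() { "$SPDK/test/json_config/config_filter.py" -method sort; }
    rpc save_config | sort_cfg > /tmp/live.json
    sort_cfg < "$SPDK/spdk_tgt_config.json" > /tmp/file.json
    diff -u /tmp/live.json /tmp/file.json && echo 'INFO: JSON config files are the same'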
00:04:42.577 09:37:59 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:42.578 09:37:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:42.578 09:37:59 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.578 09:37:59 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:42.578 09:37:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:42.578 + '[' 2 -ne 2 ']' 00:04:42.578 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:42.837 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:42.837 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:42.837 +++ basename /dev/fd/62 00:04:42.837 ++ mktemp /tmp/62.XXX 00:04:42.837 + tmp_file_1=/tmp/62.PiF 00:04:42.837 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.837 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:42.837 + tmp_file_2=/tmp/spdk_tgt_config.json.BjX 00:04:42.837 + ret=0 00:04:42.837 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:43.096 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:43.096 + diff -u /tmp/62.PiF /tmp/spdk_tgt_config.json.BjX 00:04:43.096 + ret=1 00:04:43.096 + echo '=== Start of file: /tmp/62.PiF ===' 00:04:43.096 + cat /tmp/62.PiF 00:04:43.096 + echo '=== End of file: /tmp/62.PiF ===' 00:04:43.096 + echo '' 00:04:43.096 + echo '=== Start of file: /tmp/spdk_tgt_config.json.BjX ===' 00:04:43.096 + cat /tmp/spdk_tgt_config.json.BjX 00:04:43.096 + echo '=== End of file: /tmp/spdk_tgt_config.json.BjX ===' 00:04:43.096 + echo '' 00:04:43.096 + rm /tmp/62.PiF /tmp/spdk_tgt_config.json.BjX 00:04:43.096 + exit 1 00:04:43.096 09:37:59 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:43.096 INFO: configuration change detected. 
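[Annotation] The negative leg then mutates exactly one object, the throwaway MallocBdevForConfigChangeCheck bdev created for this purpose, and expects the same diff to fail, proving that live changes really surface in save_config output. Reusing the rpc and sort_cfg helpers from the sketches above:

    rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
    rpc save_config | sort_cfg > /tmp/live.json
    diff -u /tmp/live.json /tmp/file.json || echo 'INFO: configuration change detected.'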
00:04:43.096 09:37:59 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:43.096 09:37:59 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:43.096 09:37:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:43.096 09:37:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.096 09:37:59 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:43.096 09:37:59 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:43.096 09:37:59 json_config -- json_config/json_config.sh@317 -- # [[ -n 1768788 ]] 00:04:43.096 09:37:59 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:43.096 09:37:59 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:43.096 09:37:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:43.096 09:37:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.096 09:37:59 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:43.096 09:37:59 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:43.096 09:37:59 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:43.096 09:37:59 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:43.096 09:37:59 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:43.096 09:37:59 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:43.096 09:37:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:43.096 09:37:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.096 09:37:59 json_config -- json_config/json_config.sh@323 -- # killprocess 1768788 00:04:43.096 09:37:59 json_config -- common/autotest_common.sh@948 -- # '[' -z 1768788 ']' 00:04:43.096 09:37:59 json_config -- common/autotest_common.sh@952 -- # kill -0 1768788 00:04:43.096 09:37:59 json_config -- common/autotest_common.sh@953 -- # uname 00:04:43.096 09:37:59 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:43.096 09:37:59 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1768788 00:04:43.096 09:37:59 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:43.096 09:37:59 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:43.096 09:37:59 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1768788' 00:04:43.096 killing process with pid 1768788 00:04:43.096 09:37:59 json_config -- common/autotest_common.sh@967 -- # kill 1768788 00:04:43.096 09:37:59 json_config -- common/autotest_common.sh@972 -- # wait 1768788 00:04:45.010 09:38:01 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:45.010 09:38:01 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:45.010 09:38:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.010 09:38:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.010 09:38:01 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:45.010 09:38:01 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:45.010 INFO: Success 00:04:45.010 00:04:45.010 real 0m16.695s 
00:04:45.010 user 0m18.468s 00:04:45.010 sys 0m2.217s 00:04:45.010 09:38:01 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.010 09:38:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.010 ************************************ 00:04:45.010 END TEST json_config 00:04:45.010 ************************************ 00:04:45.010 09:38:01 -- common/autotest_common.sh@1142 -- # return 0 00:04:45.010 09:38:01 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:45.010 09:38:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.010 09:38:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.010 09:38:01 -- common/autotest_common.sh@10 -- # set +x 00:04:45.010 ************************************ 00:04:45.010 START TEST json_config_extra_key 00:04:45.010 ************************************ 00:04:45.010 09:38:01 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:45.010 09:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:45.010 09:38:01 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:45.010 09:38:01 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:45.010 09:38:01 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:45.010 09:38:01 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.010 09:38:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.010 09:38:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.010 09:38:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:45.010 09:38:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:45.010 09:38:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:45.011 09:38:01 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:45.011 09:38:01 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:45.011 09:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:45.011 09:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:45.011 09:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:45.011 09:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:45.011 09:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:45.011 09:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:45.011 09:38:01 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:45.011 09:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:45.011 09:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:45.011 09:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:45.011 09:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:45.011 INFO: launching applications... 00:04:45.011 09:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:45.011 09:38:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:45.011 09:38:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:45.011 09:38:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:45.011 09:38:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:45.011 09:38:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:45.011 09:38:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.011 09:38:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.011 09:38:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1769825 00:04:45.011 09:38:01 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:45.011 09:38:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:45.011 Waiting for target to run... 00:04:45.011 09:38:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1769825 /var/tmp/spdk_tgt.sock 00:04:45.011 09:38:01 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1769825 ']' 00:04:45.011 09:38:01 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:45.011 09:38:01 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.011 09:38:01 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:45.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:45.011 09:38:01 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.011 09:38:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:45.011 [2024-07-15 09:38:01.609637] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:04:45.011 [2024-07-15 09:38:01.609734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1769825 ] 00:04:45.011 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.269 [2024-07-15 09:38:01.921042] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:45.269 [2024-07-15 09:38:01.955204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.269 [2024-07-15 09:38:02.020441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.836 09:38:02 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.836 09:38:02 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:45.836 09:38:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:45.836 00:04:45.836 09:38:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:45.836 INFO: shutting down applications... 00:04:45.836 09:38:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:45.836 09:38:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:45.836 09:38:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:45.836 09:38:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1769825 ]] 00:04:45.836 09:38:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1769825 00:04:45.836 09:38:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:45.836 09:38:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.836 09:38:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1769825 00:04:45.836 09:38:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.404 09:38:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.404 09:38:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.404 09:38:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1769825 00:04:46.404 09:38:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:46.404 09:38:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:46.404 09:38:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:46.404 09:38:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:46.404 SPDK target shutdown done 00:04:46.404 09:38:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:46.404 Success 00:04:46.404 00:04:46.404 real 0m1.539s 00:04:46.404 user 0m1.512s 00:04:46.404 sys 0m0.428s 00:04:46.404 09:38:03 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.404 09:38:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:46.404 ************************************ 00:04:46.404 END TEST json_config_extra_key 00:04:46.404 ************************************ 00:04:46.404 09:38:03 -- common/autotest_common.sh@1142 -- # return 0 00:04:46.404 09:38:03 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:46.404 09:38:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.404 09:38:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.404 09:38:03 -- common/autotest_common.sh@10 -- # set +x 00:04:46.404 ************************************ 00:04:46.404 START TEST alias_rpc 00:04:46.404 ************************************ 00:04:46.404 09:38:03 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:46.404 * Looking for test storage... 00:04:46.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:46.404 09:38:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:46.404 09:38:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1770023 00:04:46.404 09:38:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.404 09:38:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1770023 00:04:46.404 09:38:03 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1770023 ']' 00:04:46.404 09:38:03 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.404 09:38:03 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.404 09:38:03 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.404 09:38:03 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.404 09:38:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.663 [2024-07-15 09:38:03.191378] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:04:46.663 [2024-07-15 09:38:03.191459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1770023 ] 00:04:46.663 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.663 [2024-07-15 09:38:03.223785] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
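[Annotation] alias_rpc is a much smaller test: boot a default spdk_tgt (no -r flag, so the default /var/tmp/spdk.sock socket), wait for it, and, as the next lines of the trace show, feed a configuration through rpc.py load_config -i. A hedged sketch; the input file name is hypothetical, and -i is read here as the switch that makes deprecated method aliases load, which is the point of this test:

    "$SPDK/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!
    # waitforlisten "$spdk_tgt_pid" as in the earlier tests, then:
    "$SPDK/scripts/rpc.py" load_config -i < config_with_aliases.json   # hypothetical input
    kill "$spdk_tgt_pid"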
00:04:46.663 [2024-07-15 09:38:03.253928] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.663 [2024-07-15 09:38:03.341753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.921 09:38:03 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.921 09:38:03 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:46.921 09:38:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:47.181 09:38:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1770023 00:04:47.181 09:38:03 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1770023 ']' 00:04:47.181 09:38:03 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1770023 00:04:47.181 09:38:03 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:47.181 09:38:03 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:47.181 09:38:03 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1770023 00:04:47.181 09:38:03 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:47.181 09:38:03 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:47.181 09:38:03 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1770023' 00:04:47.181 killing process with pid 1770023 00:04:47.181 09:38:03 alias_rpc -- common/autotest_common.sh@967 -- # kill 1770023 00:04:47.181 09:38:03 alias_rpc -- common/autotest_common.sh@972 -- # wait 1770023 00:04:47.748 00:04:47.748 real 0m1.193s 00:04:47.748 user 0m1.271s 00:04:47.748 sys 0m0.424s 00:04:47.748 09:38:04 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.748 09:38:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.748 ************************************ 00:04:47.748 END TEST alias_rpc 00:04:47.748 ************************************ 00:04:47.748 09:38:04 -- common/autotest_common.sh@1142 -- # return 0 00:04:47.748 09:38:04 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:47.748 09:38:04 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:47.748 09:38:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.748 09:38:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.748 09:38:04 -- common/autotest_common.sh@10 -- # set +x 00:04:47.748 ************************************ 00:04:47.748 START TEST spdkcli_tcp 00:04:47.748 ************************************ 00:04:47.748 09:38:04 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:47.748 * Looking for test storage... 
00:04:47.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:47.748 09:38:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:47.748 09:38:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:47.748 09:38:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:47.748 09:38:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:47.748 09:38:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:47.748 09:38:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:47.748 09:38:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:47.748 09:38:04 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:47.749 09:38:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:47.749 09:38:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1770225 00:04:47.749 09:38:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:47.749 09:38:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1770225 00:04:47.749 09:38:04 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1770225 ']' 00:04:47.749 09:38:04 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.749 09:38:04 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.749 09:38:04 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.749 09:38:04 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.749 09:38:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:47.749 [2024-07-15 09:38:04.433052] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:04:47.749 [2024-07-15 09:38:04.433158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1770225 ] 00:04:47.749 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.749 [2024-07-15 09:38:04.465948] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
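[Annotation] spdkcli_tcp adds an IP hop to the same machinery: the target runs on two cores (-m 0x3, main core selected with -p 0), and, as traced next, socat splices a TCP listener on 127.0.0.1:9998 onto the target's UNIX socket so rpc.py can be exercised over TCP. The bridge, condensed, with rpc.py's -r and -t flags read off the trace as connection retries and per-attempt timeout:

    "$SPDK/build/bin/spdk_tgt" -m 0x3 -p 0 &
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    "$SPDK/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"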
00:04:47.749 [2024-07-15 09:38:04.492385] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.008 [2024-07-15 09:38:04.578371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.008 [2024-07-15 09:38:04.578375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.267 09:38:04 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.267 09:38:04 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:48.267 09:38:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1770329 00:04:48.267 09:38:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:48.267 09:38:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:48.525 [ 00:04:48.525 "bdev_malloc_delete", 00:04:48.525 "bdev_malloc_create", 00:04:48.525 "bdev_null_resize", 00:04:48.525 "bdev_null_delete", 00:04:48.525 "bdev_null_create", 00:04:48.525 "bdev_nvme_cuse_unregister", 00:04:48.525 "bdev_nvme_cuse_register", 00:04:48.525 "bdev_opal_new_user", 00:04:48.525 "bdev_opal_set_lock_state", 00:04:48.525 "bdev_opal_delete", 00:04:48.525 "bdev_opal_get_info", 00:04:48.525 "bdev_opal_create", 00:04:48.525 "bdev_nvme_opal_revert", 00:04:48.526 "bdev_nvme_opal_init", 00:04:48.526 "bdev_nvme_send_cmd", 00:04:48.526 "bdev_nvme_get_path_iostat", 00:04:48.526 "bdev_nvme_get_mdns_discovery_info", 00:04:48.526 "bdev_nvme_stop_mdns_discovery", 00:04:48.526 "bdev_nvme_start_mdns_discovery", 00:04:48.526 "bdev_nvme_set_multipath_policy", 00:04:48.526 "bdev_nvme_set_preferred_path", 00:04:48.526 "bdev_nvme_get_io_paths", 00:04:48.526 "bdev_nvme_remove_error_injection", 00:04:48.526 "bdev_nvme_add_error_injection", 00:04:48.526 "bdev_nvme_get_discovery_info", 00:04:48.526 "bdev_nvme_stop_discovery", 00:04:48.526 "bdev_nvme_start_discovery", 00:04:48.526 "bdev_nvme_get_controller_health_info", 00:04:48.526 "bdev_nvme_disable_controller", 00:04:48.526 "bdev_nvme_enable_controller", 00:04:48.526 "bdev_nvme_reset_controller", 00:04:48.526 "bdev_nvme_get_transport_statistics", 00:04:48.526 "bdev_nvme_apply_firmware", 00:04:48.526 "bdev_nvme_detach_controller", 00:04:48.526 "bdev_nvme_get_controllers", 00:04:48.526 "bdev_nvme_attach_controller", 00:04:48.526 "bdev_nvme_set_hotplug", 00:04:48.526 "bdev_nvme_set_options", 00:04:48.526 "bdev_passthru_delete", 00:04:48.526 "bdev_passthru_create", 00:04:48.526 "bdev_lvol_set_parent_bdev", 00:04:48.526 "bdev_lvol_set_parent", 00:04:48.526 "bdev_lvol_check_shallow_copy", 00:04:48.526 "bdev_lvol_start_shallow_copy", 00:04:48.526 "bdev_lvol_grow_lvstore", 00:04:48.526 "bdev_lvol_get_lvols", 00:04:48.526 "bdev_lvol_get_lvstores", 00:04:48.526 "bdev_lvol_delete", 00:04:48.526 "bdev_lvol_set_read_only", 00:04:48.526 "bdev_lvol_resize", 00:04:48.526 "bdev_lvol_decouple_parent", 00:04:48.526 "bdev_lvol_inflate", 00:04:48.526 "bdev_lvol_rename", 00:04:48.526 "bdev_lvol_clone_bdev", 00:04:48.526 "bdev_lvol_clone", 00:04:48.526 "bdev_lvol_snapshot", 00:04:48.526 "bdev_lvol_create", 00:04:48.526 "bdev_lvol_delete_lvstore", 00:04:48.526 "bdev_lvol_rename_lvstore", 00:04:48.526 "bdev_lvol_create_lvstore", 00:04:48.526 "bdev_raid_set_options", 00:04:48.526 "bdev_raid_remove_base_bdev", 00:04:48.526 "bdev_raid_add_base_bdev", 00:04:48.526 "bdev_raid_delete", 00:04:48.526 "bdev_raid_create", 00:04:48.526 "bdev_raid_get_bdevs", 00:04:48.526 "bdev_error_inject_error", 00:04:48.526 "bdev_error_delete", 
00:04:48.526 "bdev_error_create", 00:04:48.526 "bdev_split_delete", 00:04:48.526 "bdev_split_create", 00:04:48.526 "bdev_delay_delete", 00:04:48.526 "bdev_delay_create", 00:04:48.526 "bdev_delay_update_latency", 00:04:48.526 "bdev_zone_block_delete", 00:04:48.526 "bdev_zone_block_create", 00:04:48.526 "blobfs_create", 00:04:48.526 "blobfs_detect", 00:04:48.526 "blobfs_set_cache_size", 00:04:48.526 "bdev_aio_delete", 00:04:48.526 "bdev_aio_rescan", 00:04:48.526 "bdev_aio_create", 00:04:48.526 "bdev_ftl_set_property", 00:04:48.526 "bdev_ftl_get_properties", 00:04:48.526 "bdev_ftl_get_stats", 00:04:48.526 "bdev_ftl_unmap", 00:04:48.526 "bdev_ftl_unload", 00:04:48.526 "bdev_ftl_delete", 00:04:48.526 "bdev_ftl_load", 00:04:48.526 "bdev_ftl_create", 00:04:48.526 "bdev_virtio_attach_controller", 00:04:48.526 "bdev_virtio_scsi_get_devices", 00:04:48.526 "bdev_virtio_detach_controller", 00:04:48.526 "bdev_virtio_blk_set_hotplug", 00:04:48.526 "bdev_iscsi_delete", 00:04:48.526 "bdev_iscsi_create", 00:04:48.526 "bdev_iscsi_set_options", 00:04:48.526 "accel_error_inject_error", 00:04:48.526 "ioat_scan_accel_module", 00:04:48.526 "dsa_scan_accel_module", 00:04:48.526 "iaa_scan_accel_module", 00:04:48.526 "vfu_virtio_create_scsi_endpoint", 00:04:48.526 "vfu_virtio_scsi_remove_target", 00:04:48.526 "vfu_virtio_scsi_add_target", 00:04:48.526 "vfu_virtio_create_blk_endpoint", 00:04:48.526 "vfu_virtio_delete_endpoint", 00:04:48.526 "keyring_file_remove_key", 00:04:48.526 "keyring_file_add_key", 00:04:48.526 "keyring_linux_set_options", 00:04:48.526 "iscsi_get_histogram", 00:04:48.526 "iscsi_enable_histogram", 00:04:48.526 "iscsi_set_options", 00:04:48.526 "iscsi_get_auth_groups", 00:04:48.526 "iscsi_auth_group_remove_secret", 00:04:48.526 "iscsi_auth_group_add_secret", 00:04:48.526 "iscsi_delete_auth_group", 00:04:48.526 "iscsi_create_auth_group", 00:04:48.526 "iscsi_set_discovery_auth", 00:04:48.526 "iscsi_get_options", 00:04:48.526 "iscsi_target_node_request_logout", 00:04:48.526 "iscsi_target_node_set_redirect", 00:04:48.526 "iscsi_target_node_set_auth", 00:04:48.526 "iscsi_target_node_add_lun", 00:04:48.526 "iscsi_get_stats", 00:04:48.526 "iscsi_get_connections", 00:04:48.526 "iscsi_portal_group_set_auth", 00:04:48.526 "iscsi_start_portal_group", 00:04:48.526 "iscsi_delete_portal_group", 00:04:48.526 "iscsi_create_portal_group", 00:04:48.526 "iscsi_get_portal_groups", 00:04:48.526 "iscsi_delete_target_node", 00:04:48.526 "iscsi_target_node_remove_pg_ig_maps", 00:04:48.526 "iscsi_target_node_add_pg_ig_maps", 00:04:48.526 "iscsi_create_target_node", 00:04:48.526 "iscsi_get_target_nodes", 00:04:48.526 "iscsi_delete_initiator_group", 00:04:48.526 "iscsi_initiator_group_remove_initiators", 00:04:48.526 "iscsi_initiator_group_add_initiators", 00:04:48.526 "iscsi_create_initiator_group", 00:04:48.526 "iscsi_get_initiator_groups", 00:04:48.526 "nvmf_set_crdt", 00:04:48.526 "nvmf_set_config", 00:04:48.526 "nvmf_set_max_subsystems", 00:04:48.526 "nvmf_stop_mdns_prr", 00:04:48.526 "nvmf_publish_mdns_prr", 00:04:48.526 "nvmf_subsystem_get_listeners", 00:04:48.526 "nvmf_subsystem_get_qpairs", 00:04:48.526 "nvmf_subsystem_get_controllers", 00:04:48.526 "nvmf_get_stats", 00:04:48.526 "nvmf_get_transports", 00:04:48.526 "nvmf_create_transport", 00:04:48.526 "nvmf_get_targets", 00:04:48.526 "nvmf_delete_target", 00:04:48.526 "nvmf_create_target", 00:04:48.526 "nvmf_subsystem_allow_any_host", 00:04:48.526 "nvmf_subsystem_remove_host", 00:04:48.526 "nvmf_subsystem_add_host", 00:04:48.526 "nvmf_ns_remove_host", 
00:04:48.526 "nvmf_ns_add_host", 00:04:48.526 "nvmf_subsystem_remove_ns", 00:04:48.526 "nvmf_subsystem_add_ns", 00:04:48.526 "nvmf_subsystem_listener_set_ana_state", 00:04:48.526 "nvmf_discovery_get_referrals", 00:04:48.526 "nvmf_discovery_remove_referral", 00:04:48.526 "nvmf_discovery_add_referral", 00:04:48.526 "nvmf_subsystem_remove_listener", 00:04:48.526 "nvmf_subsystem_add_listener", 00:04:48.526 "nvmf_delete_subsystem", 00:04:48.526 "nvmf_create_subsystem", 00:04:48.526 "nvmf_get_subsystems", 00:04:48.526 "env_dpdk_get_mem_stats", 00:04:48.526 "nbd_get_disks", 00:04:48.526 "nbd_stop_disk", 00:04:48.526 "nbd_start_disk", 00:04:48.526 "ublk_recover_disk", 00:04:48.526 "ublk_get_disks", 00:04:48.526 "ublk_stop_disk", 00:04:48.526 "ublk_start_disk", 00:04:48.526 "ublk_destroy_target", 00:04:48.526 "ublk_create_target", 00:04:48.526 "virtio_blk_create_transport", 00:04:48.526 "virtio_blk_get_transports", 00:04:48.526 "vhost_controller_set_coalescing", 00:04:48.526 "vhost_get_controllers", 00:04:48.526 "vhost_delete_controller", 00:04:48.526 "vhost_create_blk_controller", 00:04:48.526 "vhost_scsi_controller_remove_target", 00:04:48.526 "vhost_scsi_controller_add_target", 00:04:48.526 "vhost_start_scsi_controller", 00:04:48.526 "vhost_create_scsi_controller", 00:04:48.526 "thread_set_cpumask", 00:04:48.526 "framework_get_governor", 00:04:48.526 "framework_get_scheduler", 00:04:48.526 "framework_set_scheduler", 00:04:48.526 "framework_get_reactors", 00:04:48.526 "thread_get_io_channels", 00:04:48.526 "thread_get_pollers", 00:04:48.526 "thread_get_stats", 00:04:48.526 "framework_monitor_context_switch", 00:04:48.526 "spdk_kill_instance", 00:04:48.526 "log_enable_timestamps", 00:04:48.526 "log_get_flags", 00:04:48.526 "log_clear_flag", 00:04:48.526 "log_set_flag", 00:04:48.526 "log_get_level", 00:04:48.526 "log_set_level", 00:04:48.526 "log_get_print_level", 00:04:48.526 "log_set_print_level", 00:04:48.526 "framework_enable_cpumask_locks", 00:04:48.526 "framework_disable_cpumask_locks", 00:04:48.526 "framework_wait_init", 00:04:48.526 "framework_start_init", 00:04:48.526 "scsi_get_devices", 00:04:48.526 "bdev_get_histogram", 00:04:48.526 "bdev_enable_histogram", 00:04:48.526 "bdev_set_qos_limit", 00:04:48.526 "bdev_set_qd_sampling_period", 00:04:48.526 "bdev_get_bdevs", 00:04:48.526 "bdev_reset_iostat", 00:04:48.526 "bdev_get_iostat", 00:04:48.526 "bdev_examine", 00:04:48.526 "bdev_wait_for_examine", 00:04:48.526 "bdev_set_options", 00:04:48.526 "notify_get_notifications", 00:04:48.526 "notify_get_types", 00:04:48.526 "accel_get_stats", 00:04:48.526 "accel_set_options", 00:04:48.526 "accel_set_driver", 00:04:48.526 "accel_crypto_key_destroy", 00:04:48.526 "accel_crypto_keys_get", 00:04:48.526 "accel_crypto_key_create", 00:04:48.526 "accel_assign_opc", 00:04:48.526 "accel_get_module_info", 00:04:48.526 "accel_get_opc_assignments", 00:04:48.526 "vmd_rescan", 00:04:48.526 "vmd_remove_device", 00:04:48.526 "vmd_enable", 00:04:48.526 "sock_get_default_impl", 00:04:48.526 "sock_set_default_impl", 00:04:48.526 "sock_impl_set_options", 00:04:48.526 "sock_impl_get_options", 00:04:48.526 "iobuf_get_stats", 00:04:48.526 "iobuf_set_options", 00:04:48.526 "keyring_get_keys", 00:04:48.526 "framework_get_pci_devices", 00:04:48.526 "framework_get_config", 00:04:48.526 "framework_get_subsystems", 00:04:48.526 "vfu_tgt_set_base_path", 00:04:48.526 "trace_get_info", 00:04:48.526 "trace_get_tpoint_group_mask", 00:04:48.526 "trace_disable_tpoint_group", 00:04:48.526 "trace_enable_tpoint_group", 00:04:48.526 
"trace_clear_tpoint_mask", 00:04:48.526 "trace_set_tpoint_mask", 00:04:48.526 "spdk_get_version", 00:04:48.526 "rpc_get_methods" 00:04:48.526 ] 00:04:48.526 09:38:05 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:48.526 09:38:05 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:48.526 09:38:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.527 09:38:05 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:48.527 09:38:05 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1770225 00:04:48.527 09:38:05 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1770225 ']' 00:04:48.527 09:38:05 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1770225 00:04:48.527 09:38:05 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:48.527 09:38:05 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:48.527 09:38:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1770225 00:04:48.527 09:38:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:48.527 09:38:05 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:48.527 09:38:05 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1770225' 00:04:48.527 killing process with pid 1770225 00:04:48.527 09:38:05 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1770225 00:04:48.527 09:38:05 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1770225 00:04:48.785 00:04:48.785 real 0m1.184s 00:04:48.785 user 0m2.106s 00:04:48.785 sys 0m0.443s 00:04:48.785 09:38:05 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.785 09:38:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.785 ************************************ 00:04:48.785 END TEST spdkcli_tcp 00:04:48.785 ************************************ 00:04:48.785 09:38:05 -- common/autotest_common.sh@1142 -- # return 0 00:04:48.785 09:38:05 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:48.785 09:38:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.785 09:38:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.785 09:38:05 -- common/autotest_common.sh@10 -- # set +x 00:04:48.785 ************************************ 00:04:48.785 START TEST dpdk_mem_utility 00:04:48.785 ************************************ 00:04:48.785 09:38:05 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.045 * Looking for test storage... 
00:04:49.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:49.045 09:38:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:49.045 09:38:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1770527 00:04:49.045 09:38:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.045 09:38:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1770527 00:04:49.045 09:38:05 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1770527 ']' 00:04:49.045 09:38:05 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.045 09:38:05 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.045 09:38:05 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.045 09:38:05 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.045 09:38:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:49.045 [2024-07-15 09:38:05.667591] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:04:49.046 [2024-07-15 09:38:05.667683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1770527 ] 00:04:49.046 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.046 [2024-07-15 09:38:05.699760] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
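The test below drives scripts/dpdk_mem_info.py against the freshly started spdk_tgt: it first calls the env_dpdk_get_mem_stats RPC (which makes the target write /tmp/spdk_mem_dump.txt), then runs the script once for a summary and once with -m 0 for per-element detail on heap 0. A minimal sketch of that sequence, assuming the target listens on the default /var/tmp/spdk.sock:

    # Sketch of the dpdk_mem_utility flow logged below; paths are this workspace's.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats   # target reports /tmp/spdk_mem_dump.txt
    $SPDK/scripts/dpdk_mem_info.py                # summarize heaps, mempools, memzones
    $SPDK/scripts/dpdk_mem_info.py -m 0           # list heap 0 element by element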
00:04:49.046 [2024-07-15 09:38:05.730120] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.046 [2024-07-15 09:38:05.821598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.305 09:38:06 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:49.305 09:38:06 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:49.305 09:38:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:49.305 09:38:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:49.305 09:38:06 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.305 09:38:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:49.305 { 00:04:49.305 "filename": "/tmp/spdk_mem_dump.txt" 00:04:49.305 } 00:04:49.305 09:38:06 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.305 09:38:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:49.566 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:49.566 1 heaps totaling size 814.000000 MiB 00:04:49.566 size: 814.000000 MiB heap id: 0 00:04:49.566 end heaps---------- 00:04:49.566 8 mempools totaling size 598.116089 MiB 00:04:49.566 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:49.566 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:49.566 size: 84.521057 MiB name: bdev_io_1770527 00:04:49.566 size: 51.011292 MiB name: evtpool_1770527 00:04:49.566 size: 50.003479 MiB name: msgpool_1770527 00:04:49.566 size: 21.763794 MiB name: PDU_Pool 00:04:49.566 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:49.566 size: 0.026123 MiB name: Session_Pool 00:04:49.566 end mempools------- 00:04:49.566 6 memzones totaling size 4.142822 MiB 00:04:49.566 size: 1.000366 MiB name: RG_ring_0_1770527 00:04:49.566 size: 1.000366 MiB name: RG_ring_1_1770527 00:04:49.566 size: 1.000366 MiB name: RG_ring_4_1770527 00:04:49.566 size: 1.000366 MiB name: RG_ring_5_1770527 00:04:49.566 size: 0.125366 MiB name: RG_ring_2_1770527 00:04:49.566 size: 0.015991 MiB name: RG_ring_3_1770527 00:04:49.566 end memzones------- 00:04:49.566 09:38:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:49.566 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:49.566 list of free elements. 
size: 12.519348 MiB 00:04:49.566 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:49.566 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:49.566 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:49.566 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:49.566 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:49.566 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:49.566 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:49.566 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:49.566 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:49.566 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:49.566 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:49.566 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:49.566 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:49.566 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:49.566 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:49.566 list of standard malloc elements. size: 199.218079 MiB 00:04:49.566 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:49.566 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:49.566 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:49.566 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:49.566 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:49.566 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:49.566 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:49.566 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:49.566 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:49.566 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:49.566 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:49.566 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:49.566 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:49.566 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:49.566 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:49.566 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:49.566 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:49.566 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:49.566 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:49.566 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:49.566 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:49.566 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:49.566 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:49.566 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:49.566 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:49.566 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:49.566 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:49.566 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:49.566 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:49.566 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:49.566 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:49.566 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:49.566 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:04:49.566 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:49.566 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:49.566 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:49.566 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:49.566 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:49.566 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:49.566 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:49.566 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:49.566 list of memzone associated elements. size: 602.262573 MiB 00:04:49.566 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:49.566 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:49.566 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:49.566 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:49.566 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:49.566 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1770527_0 00:04:49.566 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:49.566 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1770527_0 00:04:49.566 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:49.566 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1770527_0 00:04:49.566 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:49.566 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:49.566 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:49.566 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:49.566 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:49.566 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1770527 00:04:49.566 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:49.566 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1770527 00:04:49.566 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:49.566 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1770527 00:04:49.566 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:49.566 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:49.566 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:49.566 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:49.566 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:49.566 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:49.566 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:49.566 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:49.566 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:49.566 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1770527 00:04:49.566 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:49.566 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1770527 00:04:49.566 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:49.566 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1770527 00:04:49.566 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:49.566 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1770527 00:04:49.566 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:49.566 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1770527 00:04:49.566 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:49.566 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:49.566 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:49.566 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:49.566 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:49.566 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:49.566 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:49.566 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1770527 00:04:49.566 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:49.566 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:49.566 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:49.566 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:49.566 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:49.566 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1770527 00:04:49.566 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:49.566 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:49.566 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:49.566 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1770527 00:04:49.566 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:49.566 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1770527 00:04:49.566 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:49.566 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:49.566 09:38:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:49.567 09:38:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1770527 00:04:49.567 09:38:06 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1770527 ']' 00:04:49.567 09:38:06 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1770527 00:04:49.567 09:38:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:49.567 09:38:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.567 09:38:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1770527 00:04:49.567 09:38:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.567 09:38:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.567 09:38:06 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1770527' 00:04:49.567 killing process with pid 1770527 00:04:49.567 09:38:06 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1770527 00:04:49.567 09:38:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1770527 00:04:50.134 00:04:50.134 real 0m1.065s 00:04:50.134 user 0m1.044s 00:04:50.134 sys 0m0.421s 00:04:50.134 09:38:06 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.134 09:38:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:50.134 ************************************ 00:04:50.134 END TEST dpdk_mem_utility 00:04:50.134 ************************************ 00:04:50.134 09:38:06 -- common/autotest_common.sh@1142 -- # return 0 00:04:50.134 09:38:06 -- spdk/autotest.sh@181 -- # 
run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:50.134 09:38:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:50.134 09:38:06 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:50.134 09:38:06 -- common/autotest_common.sh@10 -- # set +x
00:04:50.134 ************************************
00:04:50.134 START TEST event
00:04:50.134 ************************************
00:04:50.134 09:38:06 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:50.134 * Looking for test storage...
00:04:50.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:50.134 09:38:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:04:50.134 09:38:06 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:50.134 09:38:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:50.134 09:38:06 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:04:50.134 09:38:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:50.134 09:38:06 event -- common/autotest_common.sh@10 -- # set +x
00:04:50.134 ************************************
00:04:50.134 START TEST event_perf
00:04:50.134 ************************************
00:04:50.134 09:38:06 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:50.134 Running I/O for 1 seconds...[2024-07-15 09:38:06.774061] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:04:50.134 [2024-07-15 09:38:06.774116] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1770715 ]
00:04:50.134 EAL: No free 2048 kB hugepages reported on node 1
00:04:50.134 [2024-07-15 09:38:06.807135] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:04:50.134 [2024-07-15 09:38:06.837748] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:50.392 [2024-07-15 09:38:06.930565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:50.392 [2024-07-15 09:38:06.930634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:04:50.392 [2024-07-15 09:38:06.930724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:04:50.392 [2024-07-15 09:38:06.930726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:51.327 Running I/O for 1 seconds...
00:04:51.327 lcore 0: 235796
00:04:51.327 lcore 1: 235796
00:04:51.327 lcore 2: 235796
00:04:51.327 lcore 3: 235797
00:04:51.327 done.
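Each lcore line above is the number of events one reactor processed during the one-second run; the test passes as long as every core reports a count. A quick way to total the counters across cores, re-using the exact binary and flags from the invocation above (a sketch; it assumes the shell has the same hugepage privileges the test ran with):

    # Re-run event_perf on 4 cores (-m 0xF) for 1 second (-t 1) and sum the lcore counts.
    EVENT_PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf
    "$EVENT_PERF" -m 0xF -t 1 | awk '/^lcore [0-9]+:/ { sum += $3 } END { print "total events: " sum }'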
00:04:51.327
00:04:51.327 real 0m1.254s
00:04:51.327 user 0m4.164s
00:04:51.327 sys 0m0.086s
00:04:51.327 09:38:08 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:51.327 09:38:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:04:51.327 ************************************
00:04:51.327 END TEST event_perf
00:04:51.327 ************************************
00:04:51.328 09:38:08 event -- common/autotest_common.sh@1142 -- # return 0
00:04:51.328 09:38:08 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:51.328 09:38:08 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:04:51.328 09:38:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:51.328 09:38:08 event -- common/autotest_common.sh@10 -- # set +x
00:04:51.328 ************************************
00:04:51.328 START TEST event_reactor
00:04:51.328 ************************************
00:04:51.328 09:38:08 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:51.328 [2024-07-15 09:38:08.074176] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:04:51.328 [2024-07-15 09:38:08.074270] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1770873 ]
00:04:51.328 EAL: No free 2048 kB hugepages reported on node 1
00:04:51.328 [2024-07-15 09:38:08.106539] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:04:51.586 [2024-07-15 09:38:08.137210] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:51.586 [2024-07-15 09:38:08.230182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:52.964 test_start
00:04:52.964 oneshot
00:04:52.964 tick 100
00:04:52.964 tick 100
00:04:52.964 tick 250
00:04:52.964 tick 100
00:04:52.964 tick 100
00:04:52.964 tick 100
00:04:52.964 tick 250
00:04:52.964 tick 500
00:04:52.964 tick 100
00:04:52.964 tick 100
00:04:52.964 tick 250
00:04:52.964 tick 100
00:04:52.964 tick 100
00:04:52.964 test_end
00:04:52.964
00:04:52.964 real 0m1.249s
00:04:52.964 user 0m1.154s
00:04:52.964 sys 0m0.091s
00:04:52.964 09:38:09 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:52.964 09:38:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:04:52.964 ************************************
00:04:52.964 END TEST event_reactor
00:04:52.964 ************************************
00:04:52.964 09:38:09 event -- common/autotest_common.sh@1142 -- # return 0
00:04:52.964 09:38:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:52.964 09:38:09 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:04:52.964 09:38:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:52.964 09:38:09 event -- common/autotest_common.sh@10 -- # set +x
00:04:52.964 ************************************
00:04:52.964 START TEST event_reactor_perf
00:04:52.964 ************************************
00:04:52.964 09:38:09 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:52.964 [2024-07-15 09:38:09.374363] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:04:52.964 [2024-07-15 09:38:09.374433] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1771030 ]
00:04:52.964 EAL: No free 2048 kB hugepages reported on node 1
00:04:52.964 [2024-07-15 09:38:09.407359] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:04:52.964 [2024-07-15 09:38:09.439187] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:52.964 [2024-07-15 09:38:09.529481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.900 test_start
00:04:53.900 test_end
00:04:53.900 Performance: 361772 events per second
00:04:53.900
00:04:53.900 real 0m1.250s
00:04:53.900 user 0m1.157s
00:04:53.900 sys 0m0.088s
00:04:53.900 09:38:10 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:53.900 09:38:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:04:53.900 ************************************
00:04:53.900 END TEST event_reactor_perf
00:04:53.900 ************************************
00:04:53.900 09:38:10 event -- common/autotest_common.sh@1142 -- # return 0
00:04:53.900 09:38:10 event -- event/event.sh@49 -- # uname -s
00:04:53.900 09:38:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:53.900 09:38:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:53.900 09:38:10 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:53.900 09:38:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:53.900 09:38:10 event -- common/autotest_common.sh@10 -- # set +x
00:04:53.900 ************************************
00:04:53.900 START TEST event_scheduler
00:04:53.900 ************************************
00:04:53.900 09:38:10 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:54.159 * Looking for test storage...
00:04:54.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:04:54.159 09:38:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:54.159 09:38:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1771209
00:04:54.159 09:38:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:54.159 09:38:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:54.159 09:38:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1771209
00:04:54.159 09:38:10 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1771209 ']'
00:04:54.159 09:38:10 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:54.159 09:38:10 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:54.159 09:38:10 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:54.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:54.159 09:38:10 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:54.159 09:38:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:54.159 [2024-07-15 09:38:10.761726] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:04:54.159 [2024-07-15 09:38:10.761816] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1771209 ]
00:04:54.159 EAL: No free 2048 kB hugepages reported on node 1
00:04:54.159 [2024-07-15 09:38:10.794077] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:04:54.159 [2024-07-15 09:38:10.820393] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:54.159 [2024-07-15 09:38:10.907869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:54.159 [2024-07-15 09:38:10.907927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:54.159 [2024-07-15 09:38:10.907993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:04:54.159 [2024-07-15 09:38:10.907996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:04:54.418 09:38:10 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:54.418 09:38:10 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0
00:04:54.418 09:38:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:54.418 09:38:10 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:54.418 09:38:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:54.418 [2024-07-15 09:38:10.984928] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:04:54.418 [2024-07-15 09:38:10.984955] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
00:04:54.418 [2024-07-15 09:38:10.984980] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:04:54.418 [2024-07-15 09:38:10.985001] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:04:54.418 [2024-07-15 09:38:10.985020] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:04:54.418 09:38:10 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:54.418 09:38:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:54.418 09:38:10 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:54.418 09:38:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:54.418 [2024-07-15 09:38:11.074724] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
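The scheduler app is launched with --wait-for-rpc so the test can select the dynamic scheduler before framework initialization completes, then release startup with framework_start_init; the dpdk_governor error above is tolerated, and the dynamic scheduler simply runs without a governor. The same steps by hand, as a sketch against the default /var/tmp/spdk.sock socket:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC framework_set_scheduler dynamic   # set while --wait-for-rpc holds init, as the test does
    $RPC framework_start_init              # finish bringing the app up
    $RPC framework_get_scheduler           # confirm 'dynamic' is now active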
00:04:54.418 09:38:11 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.418 09:38:11 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:54.418 09:38:11 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.418 09:38:11 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.418 09:38:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.418 ************************************ 00:04:54.418 START TEST scheduler_create_thread 00:04:54.418 ************************************ 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.418 2 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.418 3 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.418 4 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.418 5 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.418 6 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.418 7 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.418 8 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.418 9 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.418 10 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.418 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.677 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.677 09:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:54.677 09:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:54.677 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.677 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.936 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.936 00:04:54.936 real 0m0.591s 00:04:54.936 user 0m0.013s 00:04:54.936 sys 0m0.000s 00:04:54.936 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.936 09:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.936 ************************************ 00:04:54.936 END TEST scheduler_create_thread 00:04:54.936 ************************************ 00:04:54.936 09:38:11 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:54.936 09:38:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:54.936 09:38:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1771209 00:04:54.936 09:38:11 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1771209 ']' 00:04:54.936 09:38:11 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1771209 00:04:54.936 09:38:11 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:55.194 09:38:11 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:55.194 09:38:11 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1771209 00:04:55.194 09:38:11 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:55.194 09:38:11 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:55.195 09:38:11 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1771209' 00:04:55.195 killing process with pid 1771209 00:04:55.195 09:38:11 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1771209 00:04:55.195 09:38:11 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1771209 00:04:55.453 [2024-07-15 09:38:12.174778] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
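The scheduler_create_thread subtest above works entirely through the test app's plugin RPCs: it creates pinned fully-active threads (-a 100) and pinned idle threads (-a 0) on each core, adds unpinned threads at partial load, then re-weights one thread and deletes another by thread id. Reduced to the bare pattern (method names, masks, and loads are copied from the rpc_cmd lines above; thread ids 11 and 12 are the ones the log returned, and rpc.py must be able to import the scheduler_plugin module for this to work outside the test harness):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100  # pinned, fully active
    $RPC --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0      # pinned, idle
    $RPC --plugin scheduler_plugin scheduler_thread_set_active 11 50                       # set thread 11 to 50% active
    $RPC --plugin scheduler_plugin scheduler_thread_delete 12                              # remove thread 12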
00:04:55.712 00:04:55.712 real 0m1.727s 00:04:55.712 user 0m2.291s 00:04:55.712 sys 0m0.320s 00:04:55.712 09:38:12 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.712 09:38:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.712 ************************************ 00:04:55.712 END TEST event_scheduler 00:04:55.712 ************************************ 00:04:55.712 09:38:12 event -- common/autotest_common.sh@1142 -- # return 0 00:04:55.712 09:38:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:55.712 09:38:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:55.712 09:38:12 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.712 09:38:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.712 09:38:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.712 ************************************ 00:04:55.712 START TEST app_repeat 00:04:55.712 ************************************ 00:04:55.712 09:38:12 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:55.712 09:38:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.712 09:38:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.712 09:38:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:55.712 09:38:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.712 09:38:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:55.712 09:38:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:55.712 09:38:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:55.712 09:38:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1771525 00:04:55.712 09:38:12 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:55.712 09:38:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.712 09:38:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1771525' 00:04:55.712 Process app_repeat pid: 1771525 00:04:55.712 09:38:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:55.712 09:38:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:55.712 spdk_app_start Round 0 00:04:55.712 09:38:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1771525 /var/tmp/spdk-nbd.sock 00:04:55.712 09:38:12 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1771525 ']' 00:04:55.712 09:38:12 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:55.712 09:38:12 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.712 09:38:12 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:55.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:55.712 09:38:12 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.712 09:38:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:55.712 [2024-07-15 09:38:12.473600] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:04:55.712 [2024-07-15 09:38:12.473666] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1771525 ] 00:04:55.970 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.970 [2024-07-15 09:38:12.506690] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:55.970 [2024-07-15 09:38:12.538602] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.970 [2024-07-15 09:38:12.628409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.970 [2024-07-15 09:38:12.628415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.970 09:38:12 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.970 09:38:12 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:55.970 09:38:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.228 Malloc0 00:04:56.228 09:38:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.486 Malloc1 00:04:56.486 09:38:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.486 09:38:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.486 09:38:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.486 09:38:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:56.486 09:38:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.486 09:38:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:56.486 09:38:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.486 09:38:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.486 09:38:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.486 09:38:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:56.486 09:38:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.486 09:38:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:56.486 09:38:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:56.486 09:38:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:56.486 09:38:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.486 09:38:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:56.744 /dev/nbd0 00:04:56.744 09:38:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:56.744 09:38:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:56.744 09:38:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:56.744 09:38:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:56.744 09:38:13 
event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:56.744 09:38:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:56.744 09:38:13 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:57.001 09:38:13 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:57.001 09:38:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:57.001 09:38:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:57.001 09:38:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.001 1+0 records in 00:04:57.001 1+0 records out 00:04:57.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023616 s, 17.3 MB/s 00:04:57.001 09:38:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.001 09:38:13 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:57.001 09:38:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.001 09:38:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:57.001 09:38:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:57.001 09:38:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.001 09:38:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.001 09:38:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:57.001 /dev/nbd1 00:04:57.259 09:38:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:57.259 09:38:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:57.259 09:38:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:57.259 09:38:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:57.259 09:38:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:57.259 09:38:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:57.259 09:38:13 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:57.259 09:38:13 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:57.259 09:38:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:57.259 09:38:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:57.259 09:38:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.259 1+0 records in 00:04:57.260 1+0 records out 00:04:57.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223263 s, 18.3 MB/s 00:04:57.260 09:38:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.260 09:38:13 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:57.260 09:38:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.260 09:38:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:57.260 
09:38:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:57.260 09:38:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.260 09:38:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.260 09:38:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.260 09:38:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.260 09:38:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:57.517 { 00:04:57.517 "nbd_device": "/dev/nbd0", 00:04:57.517 "bdev_name": "Malloc0" 00:04:57.517 }, 00:04:57.517 { 00:04:57.517 "nbd_device": "/dev/nbd1", 00:04:57.517 "bdev_name": "Malloc1" 00:04:57.517 } 00:04:57.517 ]' 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:57.517 { 00:04:57.517 "nbd_device": "/dev/nbd0", 00:04:57.517 "bdev_name": "Malloc0" 00:04:57.517 }, 00:04:57.517 { 00:04:57.517 "nbd_device": "/dev/nbd1", 00:04:57.517 "bdev_name": "Malloc1" 00:04:57.517 } 00:04:57.517 ]' 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:57.517 /dev/nbd1' 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:57.517 /dev/nbd1' 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:57.517 256+0 records in 00:04:57.517 256+0 records out 00:04:57.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499899 s, 210 MB/s 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:57.517 256+0 records in 00:04:57.517 256+0 records out 00:04:57.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235851 s, 44.5 MB/s 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:57.517 256+0 records in 00:04:57.517 256+0 records out 00:04:57.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224583 s, 46.7 MB/s 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.517 09:38:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.518 09:38:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:57.518 09:38:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:57.518 09:38:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.518 09:38:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:57.781 09:38:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:57.781 09:38:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:57.781 09:38:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:57.781 09:38:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.781 09:38:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.781 09:38:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:57.781 09:38:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.781 09:38:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.781 09:38:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.781 09:38:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:58.091 09:38:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:58.091 09:38:14 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:58.091 09:38:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:58.091 09:38:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.091 09:38:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.091 09:38:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:58.091 09:38:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.091 09:38:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.091 09:38:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.091 09:38:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.091 09:38:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.348 09:38:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:58.348 09:38:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:58.348 09:38:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.348 09:38:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:58.348 09:38:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:58.348 09:38:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.348 09:38:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:58.348 09:38:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:58.348 09:38:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:58.348 09:38:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:58.348 09:38:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:58.348 09:38:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:58.348 09:38:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:58.605 09:38:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:58.863 [2024-07-15 09:38:15.511487] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.863 [2024-07-15 09:38:15.600183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.863 [2024-07-15 09:38:15.600187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.123 [2024-07-15 09:38:15.660266] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.123 [2024-07-15 09:38:15.660336] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
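The round traced above exercises the data-verify helper from bdev/nbd_common.sh: fill a scratch file with 1 MiB of random data, dd it onto each exported nbd device with O_DIRECT, then cmp the device contents back against the file. A minimal sketch of that cycle, assuming /dev/nbd0 and /dev/nbd1 are already exported via nbd_start_disk and using a generic scratch path in place of the workspace one:

    # Write/verify cycle as seen in the trace; the tmp path is an assumption.
    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct # write, bypassing the page cache
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                            # byte-compare the first 1 MiB
    done
    rm "$tmp_file"

The oflag=direct/cmp pairing is the point of the test: the data must round-trip through the nbd kernel module and the SPDK malloc bdev, not merely the page cache.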
00:05:01.658 09:38:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:01.658 09:38:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:01.658 spdk_app_start Round 1 00:05:01.658 09:38:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1771525 /var/tmp/spdk-nbd.sock 00:05:01.658 09:38:18 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1771525 ']' 00:05:01.658 09:38:18 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:01.658 09:38:18 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.658 09:38:18 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:01.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:01.658 09:38:18 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.658 09:38:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:01.916 09:38:18 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.916 09:38:18 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:01.916 09:38:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.174 Malloc0 00:05:02.174 09:38:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.432 Malloc1 00:05:02.432 09:38:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.432 09:38:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.432 09:38:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.432 09:38:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:02.432 09:38:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.432 09:38:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:02.432 09:38:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.432 09:38:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.432 09:38:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.432 09:38:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:02.432 09:38:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.433 09:38:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:02.433 09:38:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:02.433 09:38:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:02.433 09:38:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.433 09:38:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:02.691 /dev/nbd0 00:05:02.691 09:38:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:02.691 09:38:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:02.691 09:38:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:02.691 09:38:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:02.691 09:38:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:02.691 09:38:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:02.691 09:38:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:02.691 09:38:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:02.691 09:38:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:02.691 09:38:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:02.691 09:38:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.691 1+0 records in 00:05:02.691 1+0 records out 00:05:02.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00014557 s, 28.1 MB/s 00:05:02.691 09:38:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.691 09:38:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:02.691 09:38:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.691 09:38:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:02.691 09:38:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:02.691 09:38:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.691 09:38:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.691 09:38:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:02.950 /dev/nbd1 00:05:02.950 09:38:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:02.950 09:38:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:02.950 09:38:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:02.950 09:38:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:02.950 09:38:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:02.950 09:38:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:02.950 09:38:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:02.950 09:38:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:02.950 09:38:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:02.950 09:38:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:02.950 09:38:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.950 1+0 records in 00:05:02.950 1+0 records out 00:05:02.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210961 s, 19.4 MB/s 00:05:02.950 09:38:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.950 09:38:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:02.950 09:38:19 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.950 09:38:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:02.950 09:38:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:02.950 09:38:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.950 09:38:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.950 09:38:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.950 09:38:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.950 09:38:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:03.209 { 00:05:03.209 "nbd_device": "/dev/nbd0", 00:05:03.209 "bdev_name": "Malloc0" 00:05:03.209 }, 00:05:03.209 { 00:05:03.209 "nbd_device": "/dev/nbd1", 00:05:03.209 "bdev_name": "Malloc1" 00:05:03.209 } 00:05:03.209 ]' 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:03.209 { 00:05:03.209 "nbd_device": "/dev/nbd0", 00:05:03.209 "bdev_name": "Malloc0" 00:05:03.209 }, 00:05:03.209 { 00:05:03.209 "nbd_device": "/dev/nbd1", 00:05:03.209 "bdev_name": "Malloc1" 00:05:03.209 } 00:05:03.209 ]' 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:03.209 /dev/nbd1' 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:03.209 /dev/nbd1' 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:03.209 256+0 records in 00:05:03.209 256+0 records out 00:05:03.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00380369 s, 276 MB/s 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:03.209 256+0 records in 00:05:03.209 256+0 records out 00:05:03.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0237706 s, 44.1 MB/s 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:03.209 256+0 records in 00:05:03.209 256+0 records out 00:05:03.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252009 s, 41.6 MB/s 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.209 09:38:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:03.468 09:38:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:03.468 09:38:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:03.468 09:38:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:03.468 09:38:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.468 09:38:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.468 09:38:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:03.468 09:38:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.468 09:38:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.468 09:38:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.468 09:38:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:03.726 09:38:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:03.726 09:38:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:03.726 09:38:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:03.726 09:38:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.726 09:38:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.726 09:38:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:03.726 09:38:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.726 09:38:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.726 09:38:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.726 09:38:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.726 09:38:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.984 09:38:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:03.984 09:38:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:03.984 09:38:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.244 09:38:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:04.244 09:38:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:04.244 09:38:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.244 09:38:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:04.244 09:38:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:04.244 09:38:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:04.244 09:38:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:04.244 09:38:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:04.244 09:38:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:04.244 09:38:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:04.503 09:38:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:04.503 [2024-07-15 09:38:21.283407] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.763 [2024-07-15 09:38:21.374343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.763 [2024-07-15 09:38:21.374347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.763 [2024-07-15 09:38:21.436152] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:04.763 [2024-07-15 09:38:21.436242] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
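Teardown follows the same shape every round: stop each disk over the RPC socket, spin until the kernel removes it from /proc/partitions (the (( i <= 20 )) loop above), then assert that nbd_get_disks reports nothing. A condensed sketch, assuming rpc.py is on PATH and a 0.1 s retry interval, which the trace does not show:

    rpc=/var/tmp/spdk-nbd.sock
    for dev in /dev/nbd0 /dev/nbd1; do
        rpc.py -s "$rpc" nbd_stop_disk "$dev"
        nbd_name=$(basename "$dev")
        for (( i = 1; i <= 20; i++ )); do                 # waitfornbd_exit
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1                                     # assumed interval
        done
    done
    count=$(rpc.py -s "$rpc" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]                                    # mirrors the '[' 0 -ne 0 ']' guard above

The "|| true" is why the trace shows a bare "true" after the empty-list probe: grep -c prints 0 but exits non-zero when it counts no matches.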
00:05:07.298 09:38:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:07.298 09:38:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:07.298 spdk_app_start Round 2 00:05:07.298 09:38:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1771525 /var/tmp/spdk-nbd.sock 00:05:07.298 09:38:24 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1771525 ']' 00:05:07.298 09:38:24 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.298 09:38:24 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.298 09:38:24 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:07.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:07.298 09:38:24 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.298 09:38:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.556 09:38:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.556 09:38:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:07.556 09:38:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.814 Malloc0 00:05:07.814 09:38:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.072 Malloc1 00:05:08.072 09:38:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.072 09:38:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.072 09:38:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.072 09:38:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:08.072 09:38:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.072 09:38:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:08.072 09:38:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.072 09:38:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.072 09:38:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.072 09:38:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:08.072 09:38:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.072 09:38:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:08.073 09:38:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:08.073 09:38:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:08.073 09:38:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.073 09:38:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:08.331 /dev/nbd0 00:05:08.332 09:38:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:08.332 09:38:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:08.332 09:38:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:08.332 09:38:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:08.332 09:38:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:08.332 09:38:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:08.332 09:38:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:08.332 09:38:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:08.332 09:38:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:08.332 09:38:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:08.332 09:38:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.591 1+0 records in 00:05:08.591 1+0 records out 00:05:08.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196849 s, 20.8 MB/s 00:05:08.591 09:38:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:08.591 09:38:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:08.591 09:38:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:08.591 09:38:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:08.591 09:38:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:08.591 09:38:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.591 09:38:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.591 09:38:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:08.591 /dev/nbd1 00:05:08.591 09:38:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:08.591 09:38:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:08.591 09:38:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:08.591 09:38:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:08.591 09:38:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:08.591 09:38:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:08.591 09:38:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:08.591 09:38:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:08.591 09:38:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:08.591 09:38:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:08.591 09:38:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.851 1+0 records in 00:05:08.851 1+0 records out 00:05:08.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199993 s, 20.5 MB/s 00:05:08.851 09:38:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:08.851 09:38:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:08.851 09:38:25 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:08.851 09:38:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:08.851 09:38:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:08.851 09:38:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.851 09:38:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.851 09:38:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.851 09:38:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.851 09:38:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.851 09:38:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:08.851 { 00:05:08.851 "nbd_device": "/dev/nbd0", 00:05:08.851 "bdev_name": "Malloc0" 00:05:08.851 }, 00:05:08.851 { 00:05:08.851 "nbd_device": "/dev/nbd1", 00:05:08.851 "bdev_name": "Malloc1" 00:05:08.851 } 00:05:08.851 ]' 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:09.110 { 00:05:09.110 "nbd_device": "/dev/nbd0", 00:05:09.110 "bdev_name": "Malloc0" 00:05:09.110 }, 00:05:09.110 { 00:05:09.110 "nbd_device": "/dev/nbd1", 00:05:09.110 "bdev_name": "Malloc1" 00:05:09.110 } 00:05:09.110 ]' 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:09.110 /dev/nbd1' 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:09.110 /dev/nbd1' 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:09.110 256+0 records in 00:05:09.110 256+0 records out 00:05:09.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00381304 s, 275 MB/s 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:09.110 256+0 records in 00:05:09.110 256+0 records out 00:05:09.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0234603 s, 44.7 MB/s 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:09.110 256+0 records in 00:05:09.110 256+0 records out 00:05:09.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228662 s, 45.9 MB/s 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.110 09:38:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:09.368 09:38:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:09.368 09:38:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:09.368 09:38:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:09.368 09:38:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.368 09:38:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.368 09:38:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:09.368 09:38:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.368 09:38:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.368 09:38:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.368 09:38:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:09.625 09:38:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:09.626 09:38:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:09.626 09:38:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:09.626 09:38:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.626 09:38:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.626 09:38:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:09.626 09:38:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.626 09:38:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.626 09:38:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.626 09:38:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.626 09:38:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.883 09:38:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:09.883 09:38:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:09.883 09:38:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.883 09:38:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:09.883 09:38:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:09.883 09:38:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.883 09:38:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:09.883 09:38:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:09.883 09:38:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:09.883 09:38:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:09.884 09:38:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:09.884 09:38:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:09.884 09:38:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:10.142 09:38:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:10.402 [2024-07-15 09:38:27.079446] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.402 [2024-07-15 09:38:27.167980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.402 [2024-07-15 09:38:27.167984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.662 [2024-07-15 09:38:27.229994] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:10.662 [2024-07-15 09:38:27.230069] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
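On the start side, waitfornbd (the common/autotest_common.sh@866 block traced in each round) does the mirror image: poll /proc/partitions until the device appears, then issue one O_DIRECT read and check that a non-empty block came back, proving the device actually services I/O. A sketch under the same assumed 0.1 s interval, with the tmp path also an assumption:

    waitfornbd() {
        local nbd_name=$1 tmp=/tmp/nbdtest size i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One direct-I/O block read; an empty result means the device is not really up.
        dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]
    }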
00:05:13.198 09:38:29 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1771525 /var/tmp/spdk-nbd.sock 00:05:13.198 09:38:29 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1771525 ']' 00:05:13.198 09:38:29 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.198 09:38:29 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.198 09:38:29 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:13.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.199 09:38:29 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.199 09:38:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.457 09:38:30 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.457 09:38:30 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:13.457 09:38:30 event.app_repeat -- event/event.sh@39 -- # killprocess 1771525 00:05:13.457 09:38:30 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1771525 ']' 00:05:13.457 09:38:30 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1771525 00:05:13.457 09:38:30 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:13.457 09:38:30 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.458 09:38:30 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1771525 00:05:13.458 09:38:30 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.458 09:38:30 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.458 09:38:30 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1771525' 00:05:13.458 killing process with pid 1771525 00:05:13.458 09:38:30 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1771525 00:05:13.458 09:38:30 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1771525 00:05:13.715 spdk_app_start is called in Round 0. 00:05:13.715 Shutdown signal received, stop current app iteration 00:05:13.715 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 reinitialization... 00:05:13.715 spdk_app_start is called in Round 1. 00:05:13.715 Shutdown signal received, stop current app iteration 00:05:13.715 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 reinitialization... 00:05:13.715 spdk_app_start is called in Round 2. 00:05:13.715 Shutdown signal received, stop current app iteration 00:05:13.715 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 reinitialization... 00:05:13.715 spdk_app_start is called in Round 3. 
00:05:13.715 Shutdown signal received, stop current app iteration 00:05:13.715 09:38:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:13.715 09:38:30 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:13.715 00:05:13.715 real 0m17.914s 00:05:13.715 user 0m39.014s 00:05:13.715 sys 0m3.171s 00:05:13.715 09:38:30 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.715 09:38:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.715 ************************************ 00:05:13.715 END TEST app_repeat 00:05:13.715 ************************************ 00:05:13.715 09:38:30 event -- common/autotest_common.sh@1142 -- # return 0 00:05:13.715 09:38:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:13.715 09:38:30 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:13.715 09:38:30 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.715 09:38:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.715 09:38:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.715 ************************************ 00:05:13.715 START TEST cpu_locks 00:05:13.715 ************************************ 00:05:13.715 09:38:30 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:13.715 * Looking for test storage... 00:05:13.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:13.715 09:38:30 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:13.715 09:38:30 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:13.715 09:38:30 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:13.715 09:38:30 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:13.715 09:38:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.715 09:38:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.715 09:38:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.715 ************************************ 00:05:13.715 START TEST default_locks 00:05:13.715 ************************************ 00:05:13.715 09:38:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:13.715 09:38:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1773874 00:05:13.715 09:38:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.715 09:38:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1773874 00:05:13.715 09:38:30 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1773874 ']' 00:05:13.715 09:38:30 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.715 09:38:30 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.715 09:38:30 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
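Each app_repeat round ends by tearing the app down through the killprocess helper traced above (common/autotest_common.sh@948 onward): confirm the pid is still alive, check its comm name so a sudo wrapper is never signalled directly, then kill and reap it. Roughly, as a sketch:

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid" || return 1                           # pid must still exist
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 in this run
        fi
        [ "$process_name" = sudo ] && return 1               # simplified; the real helper special-cases sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                          # reap it and propagate the exit status
    }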
00:05:13.715 09:38:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.715 09:38:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.974 [2024-07-15 09:38:30.539020] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:13.974 [2024-07-15 09:38:30.539097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1773874 ] 00:05:13.974 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.974 [2024-07-15 09:38:30.571262] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:13.974 [2024-07-15 09:38:30.599310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.974 [2024-07-15 09:38:30.688384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.233 09:38:30 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.233 09:38:30 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:14.233 09:38:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1773874 00:05:14.233 09:38:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1773874 00:05:14.233 09:38:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.536 lslocks: write error 00:05:14.536 09:38:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1773874 00:05:14.536 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1773874 ']' 00:05:14.536 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1773874 00:05:14.536 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:14.536 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.536 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1773874 00:05:14.536 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:14.536 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:14.536 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1773874' 00:05:14.536 killing process with pid 1773874 00:05:14.536 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1773874 00:05:14.536 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1773874 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1773874 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1773874 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t 
waitforlisten 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 1773874 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1773874 ']' 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1773874) - No such process 00:05:15.105 ERROR: process (pid: 1773874) is no longer running 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:15.105 00:05:15.105 real 0m1.203s 00:05:15.105 user 0m1.140s 00:05:15.105 sys 0m0.532s 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.105 09:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.105 ************************************ 00:05:15.105 END TEST default_locks 00:05:15.105 ************************************ 00:05:15.105 09:38:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:15.106 09:38:31 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:15.106 09:38:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.106 09:38:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.106 09:38:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.106 ************************************ 00:05:15.106 START TEST default_locks_via_rpc 00:05:15.106 ************************************ 00:05:15.106 09:38:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:15.106 09:38:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1774040 
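The default_locks test that just finished boils down to two assertions: a target launched on core mask 0x1 must hold an spdk_cpu_lock file, probed with lslocks (the stray "lslocks: write error" above is benign, since grep -q closes the pipe as soon as it matches), and once the target is killed, waitforlisten on its pid must fail with "No such process". In outline, with spdk_tgt standing in for the full build/bin/spdk_tgt path:

    spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # the core-0 lock is held
    killprocess "$spdk_tgt_pid"
    if waitforlisten "$spdk_tgt_pid"; then               # negative path: must not succeed
        exit 1
    fi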
00:05:15.106 09:38:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.106 09:38:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1774040 00:05:15.106 09:38:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1774040 ']' 00:05:15.106 09:38:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.106 09:38:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.106 09:38:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.106 09:38:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.106 09:38:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.106 [2024-07-15 09:38:31.791595] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:15.106 [2024-07-15 09:38:31.791696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774040 ] 00:05:15.106 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.106 [2024-07-15 09:38:31.824525] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:15.106 [2024-07-15 09:38:31.850211] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.365 [2024-07-15 09:38:31.939238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.625 09:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.625 09:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:15.625 09:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:15.625 09:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.625 09:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.625 09:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.625 09:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:15.625 09:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:15.625 09:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:15.625 09:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:15.625 09:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:15.625 09:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.625 09:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.625 09:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.625 09:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1774040 00:05:15.625 09:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1774040 00:05:15.625 09:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:15.883 09:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1774040 00:05:15.883 09:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1774040 ']' 00:05:15.883 09:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1774040 00:05:15.883 09:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:15.883 09:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.883 09:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1774040 00:05:15.883 09:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:15.883 09:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:15.883 09:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1774040' 00:05:15.883 killing process with pid 1774040 00:05:15.883 09:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1774040 00:05:15.883 09:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1774040 00:05:16.142 00:05:16.142 real 0m1.171s 00:05:16.142 user 0m1.095s 00:05:16.142 sys 0m0.544s 00:05:16.142 09:38:32 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.142 09:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.142 ************************************ 00:05:16.142 END TEST default_locks_via_rpc 00:05:16.142 ************************************ 00:05:16.402 09:38:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:16.402 09:38:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:16.402 09:38:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.402 09:38:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.402 09:38:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.402 ************************************ 00:05:16.402 START TEST non_locking_app_on_locked_coremask 00:05:16.402 ************************************ 00:05:16.402 09:38:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:16.402 09:38:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1774203 00:05:16.402 09:38:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.402 09:38:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1774203 /var/tmp/spdk.sock 00:05:16.402 09:38:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1774203 ']' 00:05:16.402 09:38:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.402 09:38:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.402 09:38:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.402 09:38:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.402 09:38:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.402 [2024-07-15 09:38:33.010846] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:16.402 [2024-07-15 09:38:33.010981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774203 ] 00:05:16.402 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.402 [2024-07-15 09:38:33.043532] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
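default_locks_via_rpc, which ends above, repeats the probe but toggles the lock at runtime through the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs seen in the trace (rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock). A sketch, approximating the no_locks helper with the same lslocks probe:

    rpc_cmd framework_disable_cpumask_locks
    if lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock; then
        exit 1                                           # lock must be gone while disabled
    fi
    rpc_cmd framework_enable_cpumask_locks
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # and back again afterwards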
00:05:16.402 [2024-07-15 09:38:33.069794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.402 [2024-07-15 09:38:33.157781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.670 09:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.670 09:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:16.670 09:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1774216 00:05:16.671 09:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:16.671 09:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1774216 /var/tmp/spdk2.sock 00:05:16.671 09:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1774216 ']' 00:05:16.671 09:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.671 09:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.671 09:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:16.671 09:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.671 09:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.930 [2024-07-15 09:38:33.460230] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:16.930 [2024-07-15 09:38:33.460318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774216 ] 00:05:16.930 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.930 [2024-07-15 09:38:33.495599] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:16.930 [2024-07-15 09:38:33.552197] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
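
The scenario being set up here, with flags copied verbatim from the log (a shape sketch, not the test script itself): the first target claims core 0's lock, and the second can share the same core only because it opts out of locking and listens on a second RPC socket.

    ./build/bin/spdk_tgt -m 0x1 &        # claims core 0 (presumably /var/tmp/spdk_cpu_lock_000)
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &         # coexists on core 0 without taking the lock
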
00:05:16.930 [2024-07-15 09:38:33.552228] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.189 [2024-07-15 09:38:33.737915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.756 09:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.756 09:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:17.756 09:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1774203 00:05:17.756 09:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1774203 00:05:17.756 09:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:18.015 lslocks: write error 00:05:18.015 09:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1774203 00:05:18.015 09:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1774203 ']' 00:05:18.015 09:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1774203 00:05:18.015 09:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:18.015 09:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.015 09:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1774203 00:05:18.015 09:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.015 09:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.015 09:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1774203' 00:05:18.015 killing process with pid 1774203 00:05:18.015 09:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1774203 00:05:18.015 09:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1774203 00:05:18.952 09:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1774216 00:05:18.952 09:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1774216 ']' 00:05:18.952 09:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1774216 00:05:18.952 09:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:18.952 09:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.952 09:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1774216 00:05:18.952 09:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.952 09:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.952 09:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1774216' 00:05:18.952 
killing process with pid 1774216 00:05:18.952 09:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1774216 00:05:18.952 09:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1774216 00:05:19.520 00:05:19.520 real 0m3.072s 00:05:19.520 user 0m3.222s 00:05:19.520 sys 0m1.009s 00:05:19.520 09:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.520 09:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.520 ************************************ 00:05:19.520 END TEST non_locking_app_on_locked_coremask 00:05:19.520 ************************************ 00:05:19.520 09:38:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:19.520 09:38:36 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:19.520 09:38:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.520 09:38:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.520 09:38:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.520 ************************************ 00:05:19.520 START TEST locking_app_on_unlocked_coremask 00:05:19.520 ************************************ 00:05:19.520 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:19.520 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1774589 00:05:19.520 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:19.520 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1774589 /var/tmp/spdk.sock 00:05:19.520 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1774589 ']' 00:05:19.520 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.520 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.520 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.520 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.520 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.520 [2024-07-15 09:38:36.129040] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:05:19.520 [2024-07-15 09:38:36.129139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774589 ] 00:05:19.520 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.520 [2024-07-15 09:38:36.162330] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:19.520 [2024-07-15 09:38:36.188438] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:19.520 [2024-07-15 09:38:36.188461] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.520 [2024-07-15 09:38:36.274662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.778 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.778 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:19.778 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1774646 00:05:19.778 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:19.778 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1774646 /var/tmp/spdk2.sock 00:05:19.778 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1774646 ']' 00:05:19.778 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.778 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.778 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:19.778 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.778 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.036 [2024-07-15 09:38:36.572916] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:20.036 [2024-07-15 09:38:36.573008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774646 ] 00:05:20.036 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.036 [2024-07-15 09:38:36.606593] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:20.036 [2024-07-15 09:38:36.664881] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.295 [2024-07-15 09:38:36.848284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.862 09:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.862 09:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:20.862 09:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1774646 00:05:20.862 09:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1774646 00:05:20.862 09:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.429 lslocks: write error 00:05:21.429 09:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1774589 00:05:21.429 09:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1774589 ']' 00:05:21.429 09:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1774589 00:05:21.429 09:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:21.429 09:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.429 09:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1774589 00:05:21.429 09:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.429 09:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.429 09:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1774589' 00:05:21.429 killing process with pid 1774589 00:05:21.429 09:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1774589 00:05:21.429 09:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1774589 00:05:22.369 09:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1774646 00:05:22.369 09:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1774646 ']' 00:05:22.369 09:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1774646 00:05:22.369 09:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:22.369 09:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.369 09:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1774646 00:05:22.370 09:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.370 09:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.370 09:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1774646' 00:05:22.370 killing process with pid 1774646 00:05:22.370 
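
The "lslocks: write error" just above is almost certainly benign (our reading of the pipeline, not something the log asserts): grep -q exits at its first match, so lslocks takes a broken-pipe write failure on whatever output it had left to print.

    # same pipeline the test runs; grep -q short-circuits, lslocks may warn
    lslocks -p 1774646 | grep -q spdk_cpu_lock && echo "lock present"
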
09:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1774646 00:05:22.370 09:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1774646 00:05:22.628 00:05:22.628 real 0m3.153s 00:05:22.628 user 0m3.279s 00:05:22.628 sys 0m1.063s 00:05:22.628 09:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.628 09:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.628 ************************************ 00:05:22.628 END TEST locking_app_on_unlocked_coremask 00:05:22.628 ************************************ 00:05:22.628 09:38:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:22.628 09:38:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:22.628 09:38:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.628 09:38:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.628 09:38:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.628 ************************************ 00:05:22.628 START TEST locking_app_on_locked_coremask 00:05:22.628 ************************************ 00:05:22.628 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:22.628 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1774951 00:05:22.628 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.628 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1774951 /var/tmp/spdk.sock 00:05:22.628 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1774951 ']' 00:05:22.628 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.628 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.628 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.628 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.628 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.628 [2024-07-15 09:38:39.333920] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:22.628 [2024-07-15 09:38:39.334013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774951 ] 00:05:22.628 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.628 [2024-07-15 09:38:39.364960] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:22.628 [2024-07-15 09:38:39.396739] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.886 [2024-07-15 09:38:39.490065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.146 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.146 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:23.146 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1775079 00:05:23.146 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:23.146 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1775079 /var/tmp/spdk2.sock 00:05:23.146 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:23.146 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1775079 /var/tmp/spdk2.sock 00:05:23.146 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:23.146 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.146 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:23.146 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.146 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1775079 /var/tmp/spdk2.sock 00:05:23.146 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1775079 ']' 00:05:23.146 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.146 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.146 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.146 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.146 09:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.146 [2024-07-15 09:38:39.790890] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:23.146 [2024-07-15 09:38:39.790975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775079 ] 00:05:23.146 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.146 [2024-07-15 09:38:39.824731] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
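
The NOT wrapper exercised above inverts an assertion: this test passes only if the second target, whose core is already locked, never starts listening. A minimal sketch of the inversion; the real helper in autotest_common.sh carries extra bookkeeping:

    NOT() { "$@" && return 1 || return 0; }
    NOT waitforlisten 1775079 /var/tmp/spdk2.sock   # succeeds because pid 1775079 exits instead
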
00:05:23.146 [2024-07-15 09:38:39.875646] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1774951 has claimed it. 00:05:23.146 [2024-07-15 09:38:39.875699] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:23.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1775079) - No such process 00:05:23.714 ERROR: process (pid: 1775079) is no longer running 00:05:23.714 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.714 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:23.714 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:23.714 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.714 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:23.714 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.714 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1774951 00:05:23.714 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1774951 00:05:23.714 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.974 lslocks: write error 00:05:23.974 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1774951 00:05:23.974 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1774951 ']' 00:05:23.974 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1774951 00:05:23.974 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:23.974 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.974 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1774951 00:05:23.974 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.974 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.974 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1774951' 00:05:23.974 killing process with pid 1774951 00:05:23.974 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1774951 00:05:23.974 09:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1774951 00:05:24.543 00:05:24.543 real 0m1.879s 00:05:24.543 user 0m2.037s 00:05:24.543 sys 0m0.622s 00:05:24.543 09:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.543 09:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.543 ************************************ 00:05:24.543 END TEST locking_app_on_locked_coremask 00:05:24.543 ************************************ 00:05:24.543 09:38:41 
event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:24.543 09:38:41 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:24.544 09:38:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.544 09:38:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.544 09:38:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.544 ************************************ 00:05:24.544 START TEST locking_overlapped_coremask 00:05:24.544 ************************************ 00:05:24.544 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:24.544 09:38:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1775244 00:05:24.544 09:38:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:24.544 09:38:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1775244 /var/tmp/spdk.sock 00:05:24.544 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1775244 ']' 00:05:24.544 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.544 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.544 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.544 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.544 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.544 [2024-07-15 09:38:41.258056] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:24.544 [2024-07-15 09:38:41.258159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775244 ] 00:05:24.544 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.544 [2024-07-15 09:38:41.290890] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
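
The overlap this test manufactures is visible in the core masks: the target above runs with -m 0x7 (cores 0-2) and the second one launched below runs with -m 0x1c (cores 2-4), so they can only collide on their intersection. A quick check of that arithmetic:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2
    # which is exactly the "Cannot create lock on core 2" failure further down
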
00:05:24.544 [2024-07-15 09:38:41.317573] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:24.803 [2024-07-15 09:38:41.405145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.803 [2024-07-15 09:38:41.405200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.803 [2024-07-15 09:38:41.405203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.062 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.062 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:25.062 09:38:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1775264 00:05:25.062 09:38:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1775264 /var/tmp/spdk2.sock 00:05:25.062 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:25.062 09:38:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:25.062 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1775264 /var/tmp/spdk2.sock 00:05:25.062 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:25.062 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.062 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:25.062 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.062 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1775264 /var/tmp/spdk2.sock 00:05:25.062 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1775264 ']' 00:05:25.062 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.062 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.063 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.063 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.063 09:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.063 [2024-07-15 09:38:41.702677] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:05:25.063 [2024-07-15 09:38:41.702763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775264 ] 00:05:25.063 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.063 [2024-07-15 09:38:41.741968] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:25.063 [2024-07-15 09:38:41.796845] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1775244 has claimed it. 00:05:25.063 [2024-07-15 09:38:41.796908] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:25.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1775264) - No such process 00:05:25.657 ERROR: process (pid: 1775264) is no longer running 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1775244 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1775244 ']' 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1775244 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1775244 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1775244' 00:05:25.657 killing process with pid 1775244 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 1775244 00:05:25.657 09:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1775244 00:05:26.226 00:05:26.226 real 0m1.613s 00:05:26.226 user 0m4.366s 00:05:26.226 sys 0m0.481s 00:05:26.226 09:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.226 09:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.226 ************************************ 00:05:26.226 END TEST locking_overlapped_coremask 00:05:26.226 ************************************ 00:05:26.226 09:38:42 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:26.226 09:38:42 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:26.226 09:38:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.226 09:38:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.226 09:38:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.226 ************************************ 00:05:26.226 START TEST locking_overlapped_coremask_via_rpc 00:05:26.226 ************************************ 00:05:26.226 09:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:26.226 09:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1775516 00:05:26.226 09:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:26.226 09:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1775516 /var/tmp/spdk.sock 00:05:26.226 09:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1775516 ']' 00:05:26.226 09:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.226 09:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.226 09:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.226 09:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.226 09:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.226 [2024-07-15 09:38:42.919633] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:05:26.226 [2024-07-15 09:38:42.919737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775516 ] 00:05:26.226 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.226 [2024-07-15 09:38:42.952582] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:26.226 [2024-07-15 09:38:42.978815] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:26.226 [2024-07-15 09:38:42.978839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.485 [2024-07-15 09:38:43.068371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.485 [2024-07-15 09:38:43.068437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.485 [2024-07-15 09:38:43.068440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.742 09:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.742 09:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:26.742 09:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1775547 00:05:26.742 09:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1775547 /var/tmp/spdk2.sock 00:05:26.742 09:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1775547 ']' 00:05:26.742 09:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.742 09:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:26.742 09:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.742 09:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.742 09:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.742 09:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.742 [2024-07-15 09:38:43.361505] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:26.742 [2024-07-15 09:38:43.361585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775547 ] 00:05:26.742 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.742 [2024-07-15 09:38:43.395943] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
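
Both overlapping targets come up here because --disable-cpumask-locks skips lock acquisition at startup; the suite then turns locking on through the RPC below. The artifacts everything is checked against are per-core lock files (path pattern verbatim from the check_remaining_locks expansion elsewhere in this run):

    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null
    # expected for the 0x7 target once its locks are enabled:
    # /var/tmp/spdk_cpu_lock_000 ..._001 ..._002
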
00:05:26.742 [2024-07-15 09:38:43.450619] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:26.742 [2024-07-15 09:38:43.450645] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.999 [2024-07-15 09:38:43.626498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.999 [2024-07-15 09:38:43.626560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:26.999 [2024-07-15 09:38:43.626562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.563 [2024-07-15 09:38:44.302968] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1775516 has claimed it. 
00:05:27.563 request: 00:05:27.563 { 00:05:27.563 "method": "framework_enable_cpumask_locks", 00:05:27.563 "req_id": 1 00:05:27.563 } 00:05:27.563 Got JSON-RPC error response 00:05:27.563 response: 00:05:27.563 { 00:05:27.563 "code": -32603, 00:05:27.563 "message": "Failed to claim CPU core: 2" 00:05:27.563 } 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:27.563 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.564 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1775516 /var/tmp/spdk.sock 00:05:27.564 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1775516 ']' 00:05:27.564 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.564 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.564 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.564 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.564 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.821 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.821 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:27.821 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1775547 /var/tmp/spdk2.sock 00:05:27.821 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1775547 ']' 00:05:27.821 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.821 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.821 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
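
The raw JSON exchange above can be reproduced by hand with SPDK's rpc.py (method name and socket path verbatim from the xtrace; the invocation itself is a usage sketch):

    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # fails with code -32603, "Failed to claim CPU core: 2", for as long
    # as pid 1775516 holds core 2's lock
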
00:05:27.821 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.821 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.077 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.077 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:28.077 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:28.077 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:28.077 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:28.077 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:28.077 00:05:28.077 real 0m1.940s 00:05:28.077 user 0m1.017s 00:05:28.077 sys 0m0.183s 00:05:28.078 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.078 09:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.078 ************************************ 00:05:28.078 END TEST locking_overlapped_coremask_via_rpc 00:05:28.078 ************************************ 00:05:28.078 09:38:44 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:28.078 09:38:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:28.078 09:38:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1775516 ]] 00:05:28.078 09:38:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1775516 00:05:28.078 09:38:44 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1775516 ']' 00:05:28.078 09:38:44 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1775516 00:05:28.078 09:38:44 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:28.078 09:38:44 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.078 09:38:44 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1775516 00:05:28.078 09:38:44 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.078 09:38:44 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.078 09:38:44 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1775516' 00:05:28.078 killing process with pid 1775516 00:05:28.078 09:38:44 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1775516 00:05:28.078 09:38:44 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1775516 00:05:28.642 09:38:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1775547 ]] 00:05:28.642 09:38:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1775547 00:05:28.642 09:38:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1775547 ']' 00:05:28.642 09:38:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1775547 00:05:28.642 09:38:45 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:05:28.642 09:38:45 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.642 09:38:45 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1775547 00:05:28.642 09:38:45 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:28.642 09:38:45 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:28.642 09:38:45 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1775547' 00:05:28.642 killing process with pid 1775547 00:05:28.642 09:38:45 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1775547 00:05:28.642 09:38:45 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1775547 00:05:28.901 09:38:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:28.901 09:38:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:28.901 09:38:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1775516 ]] 00:05:28.901 09:38:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1775516 00:05:28.901 09:38:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1775516 ']' 00:05:28.901 09:38:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1775516 00:05:28.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1775516) - No such process 00:05:28.901 09:38:45 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1775516 is not found' 00:05:28.901 Process with pid 1775516 is not found 00:05:28.901 09:38:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1775547 ]] 00:05:28.901 09:38:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1775547 00:05:28.901 09:38:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1775547 ']' 00:05:28.901 09:38:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1775547 00:05:28.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1775547) - No such process 00:05:28.901 09:38:45 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1775547 is not found' 00:05:28.901 Process with pid 1775547 is not found 00:05:28.901 09:38:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:28.901 00:05:28.901 real 0m15.273s 00:05:28.901 user 0m26.772s 00:05:28.901 sys 0m5.314s 00:05:29.213 09:38:45 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.213 09:38:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.213 ************************************ 00:05:29.213 END TEST cpu_locks 00:05:29.213 ************************************ 00:05:29.213 09:38:45 event -- common/autotest_common.sh@1142 -- # return 0 00:05:29.213 00:05:29.213 real 0m39.025s 00:05:29.213 user 1m14.677s 00:05:29.213 sys 0m9.324s 00:05:29.213 09:38:45 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.213 09:38:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.213 ************************************ 00:05:29.213 END TEST event 00:05:29.213 ************************************ 00:05:29.213 09:38:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:29.213 09:38:45 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:29.213 09:38:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.213 09:38:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.213 
09:38:45 -- common/autotest_common.sh@10 -- # set +x 00:05:29.213 ************************************ 00:05:29.213 START TEST thread 00:05:29.213 ************************************ 00:05:29.213 09:38:45 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:29.213 * Looking for test storage... 00:05:29.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:29.214 09:38:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:29.214 09:38:45 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:29.214 09:38:45 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.214 09:38:45 thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.214 ************************************ 00:05:29.214 START TEST thread_poller_perf 00:05:29.214 ************************************ 00:05:29.214 09:38:45 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:29.214 [2024-07-15 09:38:45.840142] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:29.214 [2024-07-15 09:38:45.840210] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775917 ] 00:05:29.214 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.214 [2024-07-15 09:38:45.872715] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:29.214 [2024-07-15 09:38:45.903645] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.475 [2024-07-15 09:38:45.996485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.475 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:30.413 ====================================== 00:05:30.413 busy:2709242127 (cyc) 00:05:30.413 total_run_count: 282000 00:05:30.413 tsc_hz: 2700000000 (cyc) 00:05:30.413 ====================================== 00:05:30.413 poller_cost: 9607 (cyc), 3558 (nsec) 00:05:30.413 00:05:30.413 real 0m1.253s 00:05:30.413 user 0m1.171s 00:05:30.413 sys 0m0.076s 00:05:30.413 09:38:47 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.413 09:38:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.413 ************************************ 00:05:30.413 END TEST thread_poller_perf 00:05:30.413 ************************************ 00:05:30.413 09:38:47 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:30.413 09:38:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:30.413 09:38:47 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:30.413 09:38:47 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.413 09:38:47 thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.413 ************************************ 00:05:30.413 START TEST thread_poller_perf 00:05:30.413 ************************************ 00:05:30.413 09:38:47 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:30.413 [2024-07-15 09:38:47.145885] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:30.413 [2024-07-15 09:38:47.145963] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1776078 ] 00:05:30.413 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.413 [2024-07-15 09:38:47.177827] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:30.673 [2024-07-15 09:38:47.210060] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.673 [2024-07-15 09:38:47.300627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.673 Running 1000 pollers for 1 seconds with 0 microseconds period. 
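The poller_cost line in the summary above is plain division of the two counters, truncated to integers; a minimal shell check of the arithmetic:

busy=2709242127; runs=282000; tsc_hz=2700000000
cyc=$(( busy / runs ))                    # 2709242127 / 282000 = 9607 cycles per poll
nsec=$(( cyc * 1000000000 / tsc_hz ))     # 9607 cyc at 2.7 GHz = 3558 ns
echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The zero-period run below works out the same way: 2702411093 / 3862000 = 699 cyc, i.e. 258 ns per poll, plausibly cheaper because untimed pollers skip the timer bookkeeping the 1 us run pays on every dispatch.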
00:05:31.606 ====================================== 00:05:31.606 busy:2702411093 (cyc) 00:05:31.606 total_run_count: 3862000 00:05:31.606 tsc_hz: 2700000000 (cyc) 00:05:31.606 ====================================== 00:05:31.606 poller_cost: 699 (cyc), 258 (nsec) 00:05:31.606 00:05:31.606 real 0m1.253s 00:05:31.606 user 0m1.160s 00:05:31.606 sys 0m0.088s 00:05:31.606 09:38:48 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.606 09:38:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.606 ************************************ 00:05:31.606 END TEST thread_poller_perf 00:05:31.606 ************************************ 00:05:31.864 09:38:48 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:31.864 09:38:48 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:31.864 00:05:31.864 real 0m2.659s 00:05:31.864 user 0m2.400s 00:05:31.864 sys 0m0.259s 00:05:31.864 09:38:48 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.864 09:38:48 thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.864 ************************************ 00:05:31.864 END TEST thread 00:05:31.864 ************************************ 00:05:31.864 09:38:48 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.864 09:38:48 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:31.864 09:38:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.864 09:38:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.864 09:38:48 -- common/autotest_common.sh@10 -- # set +x 00:05:31.864 ************************************ 00:05:31.864 START TEST accel 00:05:31.864 ************************************ 00:05:31.864 09:38:48 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:31.864 * Looking for test storage... 00:05:31.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:31.864 09:38:48 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:31.864 09:38:48 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:31.864 09:38:48 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:31.864 09:38:48 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1776313 00:05:31.864 09:38:48 accel -- accel/accel.sh@63 -- # waitforlisten 1776313 00:05:31.864 09:38:48 accel -- common/autotest_common.sh@829 -- # '[' -z 1776313 ']' 00:05:31.864 09:38:48 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.864 09:38:48 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:31.864 09:38:48 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:31.864 09:38:48 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.864 09:38:48 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.864 09:38:48 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
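From here the suite switches to accel tests driven by a long-lived spdk_tgt. Condensing the helpers visible in the trace (the spdk_tgt launch, waitforlisten, the ERR trap), the lifecycle is roughly the following sketch, not the literal autotest_common.sh source:

spdk_tgt -c /dev/fd/63 &               # target reads its accel config from a pipe
spdk_tgt_pid=$!
trap 'killprocess $spdk_tgt_pid; exit 1' ERR
waitforlisten "$spdk_tgt_pid"          # block until /var/tmp/spdk.sock accepts RPCs
# ... individual accel tests run against the target ...
killprocess "$spdk_tgt_pid"

killprocess is the same helper that tore down the cpu_locks test at the top of this excerpt: it confirms via ps --no-headers -o comm= that the pid still names a reactor_N process (and is not sudo), sends the kill, then waits; a pid that has already exited instead yields the kill: (pid) - No such process and 'Process with pid ... is not found' lines.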
00:05:31.864 09:38:48 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.864 09:38:48 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.864 09:38:48 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.864 09:38:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.864 09:38:48 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.864 09:38:48 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.864 09:38:48 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:31.864 09:38:48 accel -- accel/accel.sh@41 -- # jq -r . 00:05:31.864 [2024-07-15 09:38:48.566388] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:31.864 [2024-07-15 09:38:48.566482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1776313 ] 00:05:31.864 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.864 [2024-07-15 09:38:48.600738] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:31.864 [2024-07-15 09:38:48.629532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.121 [2024-07-15 09:38:48.720636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.377 09:38:48 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.377 09:38:48 accel -- common/autotest_common.sh@862 -- # return 0 00:05:32.377 09:38:48 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:32.377 09:38:48 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:32.377 09:38:48 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:32.377 09:38:48 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:32.377 09:38:48 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:32.377 09:38:48 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:32.377 09:38:48 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:32.377 09:38:48 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.377 09:38:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.377 09:38:48 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.377 09:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.377 09:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.377 09:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.377 09:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.377 09:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.377 09:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.377 09:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.377 09:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.377 09:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.377 09:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.377 09:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.377 09:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.377 09:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.377 09:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.377 09:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.377 09:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.377 09:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.377 09:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.377 09:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.377 09:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.377 09:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.377 09:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.377 09:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.377 09:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.377 09:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.377 09:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.377 09:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.377 09:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.377 09:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.377 09:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.377 09:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.377 09:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.377 09:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.377 09:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.378 09:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.378 09:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.378 09:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.378 09:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.378 09:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.378 09:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.378 09:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.378 09:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.378 09:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.378 
09:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.378 09:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.378 09:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.378 09:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.378 09:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.378 09:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.378 09:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.378 09:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.378 09:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.378 09:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.378 09:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.378 09:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.378 09:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.378 09:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.378 09:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.378 09:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.378 09:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.378 09:38:49 accel -- accel/accel.sh@75 -- # killprocess 1776313 00:05:32.378 09:38:49 accel -- common/autotest_common.sh@948 -- # '[' -z 1776313 ']' 00:05:32.378 09:38:49 accel -- common/autotest_common.sh@952 -- # kill -0 1776313 00:05:32.378 09:38:49 accel -- common/autotest_common.sh@953 -- # uname 00:05:32.378 09:38:49 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.378 09:38:49 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1776313 00:05:32.378 09:38:49 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.378 09:38:49 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.378 09:38:49 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1776313' 00:05:32.378 killing process with pid 1776313 00:05:32.378 09:38:49 accel -- common/autotest_common.sh@967 -- # kill 1776313 00:05:32.378 09:38:49 accel -- common/autotest_common.sh@972 -- # wait 1776313 00:05:32.943 09:38:49 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:32.943 09:38:49 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:32.943 09:38:49 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:32.943 09:38:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.943 09:38:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.943 09:38:49 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:32.943 09:38:49 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:32.943 09:38:49 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:32.943 09:38:49 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.943 09:38:49 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.943 09:38:49 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.943 09:38:49 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.943 09:38:49 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.943 09:38:49 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:32.943 09:38:49 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
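The long run of IFS== / read -r opc module entries above is a single loop unrolled by xtrace: get_expected_opcs asks the target for its opcode-to-module table over RPC, then records that every opcode is expected on the software module. Condensed (a sketch reconstructed from the trace; $rpc_py is the suite's rpc client wrapper):

declare -A expected_opcs
exp_opcs=($($rpc_py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
for opc_opt in "${exp_opcs[@]}"; do
    IFS== read -r opc module <<< "$opc_opt"   # split "copy=software"-style pairs
    expected_opcs["$opc"]=$module             # every assignment here is software
done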
00:05:32.943 09:38:49 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.943 09:38:49 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:32.943 09:38:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:32.943 09:38:49 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:32.943 09:38:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:32.943 09:38:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.943 09:38:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.943 ************************************ 00:05:32.943 START TEST accel_missing_filename 00:05:32.943 ************************************ 00:05:32.943 09:38:49 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:32.943 09:38:49 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:32.943 09:38:49 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:32.943 09:38:49 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:32.943 09:38:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.943 09:38:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:32.943 09:38:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.943 09:38:49 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:32.943 09:38:49 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:32.943 09:38:49 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:32.943 09:38:49 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.943 09:38:49 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.943 09:38:49 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.943 09:38:49 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.943 09:38:49 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.943 09:38:49 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:32.943 09:38:49 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:32.943 [2024-07-15 09:38:49.554695] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:32.943 [2024-07-15 09:38:49.554760] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1776443 ] 00:05:32.943 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.943 [2024-07-15 09:38:49.587738] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
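accel_missing_filename is a negative test: it wraps accel_perf in NOT, the autotest helper that inverts an exit status, and the es= lines further below show that status being normalized before the final verdict. Its shape is approximately (a sketch matching the traced values, not the verbatim helper):

NOT() {
    local es=0
    "$@" || es=$?                         # run the wrapped command
    (( es > 128 )) && es=$(( es - 128 ))  # strip the 128 offset: 234 -> 106
    (( es != 0 ))                         # succeed only if the command failed
}
NOT accel_perf -t 1 -w compress           # compress without -l <input> must fail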
00:05:32.943 [2024-07-15 09:38:49.618689] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.943 [2024-07-15 09:38:49.711690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.202 [2024-07-15 09:38:49.773580] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.202 [2024-07-15 09:38:49.856917] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:33.202 A filename is required. 00:05:33.202 09:38:49 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:33.202 09:38:49 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:33.202 09:38:49 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:33.202 09:38:49 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:33.202 09:38:49 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:33.202 09:38:49 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:33.202 00:05:33.202 real 0m0.403s 00:05:33.202 user 0m0.288s 00:05:33.202 sys 0m0.150s 00:05:33.202 09:38:49 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.202 09:38:49 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:33.202 ************************************ 00:05:33.202 END TEST accel_missing_filename 00:05:33.202 ************************************ 00:05:33.202 09:38:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.202 09:38:49 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:33.202 09:38:49 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:33.202 09:38:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.202 09:38:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.202 ************************************ 00:05:33.202 START TEST accel_compress_verify 00:05:33.202 ************************************ 00:05:33.202 09:38:49 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:33.202 09:38:49 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:33.202 09:38:49 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:33.202 09:38:49 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:33.202 09:38:49 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.202 09:38:49 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:33.202 09:38:49 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.202 09:38:49 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:33.202 09:38:49 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:33.202 09:38:49 
accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:33.202 09:38:49 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.202 09:38:49 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.459 09:38:49 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.459 09:38:49 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.459 09:38:49 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.459 09:38:49 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:33.459 09:38:49 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:33.459 [2024-07-15 09:38:49.999976] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:33.459 [2024-07-15 09:38:50.000045] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1776583 ] 00:05:33.459 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.459 [2024-07-15 09:38:50.033006] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:33.459 [2024-07-15 09:38:50.065064] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.459 [2024-07-15 09:38:50.160074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.459 [2024-07-15 09:38:50.221544] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.718 [2024-07-15 09:38:50.305637] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:33.718 00:05:33.718 Compression does not support the verify option, aborting. 
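accel_compress_verify fails for a different reason than the test before it: the input file is supplied, but compress does not implement the -y verify pass, so spdk_app_start aborts and the wrapper again sees a non-zero status (161, normalized to 33 and then to 1 below). For contrast, per the option list printed further below, the calls line up as (illustrative, same binary and bib input file the test uses):

accel_perf -t 1 -w compress                        # fails: no -l input file
accel_perf -t 1 -w compress -l test/accel/bib -y   # fails: verify unsupported here
accel_perf -t 1 -w compress -l test/accel/bib      # plausibly valid: drop -y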
00:05:33.718 09:38:50 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:33.718 09:38:50 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:33.718 09:38:50 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:33.718 09:38:50 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:33.718 09:38:50 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:33.718 09:38:50 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:33.718 00:05:33.718 real 0m0.402s 00:05:33.718 user 0m0.297s 00:05:33.718 sys 0m0.142s 00:05:33.718 09:38:50 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.718 09:38:50 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:33.718 ************************************ 00:05:33.718 END TEST accel_compress_verify 00:05:33.718 ************************************ 00:05:33.718 09:38:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.718 09:38:50 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:33.718 09:38:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:33.718 09:38:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.718 09:38:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.718 ************************************ 00:05:33.718 START TEST accel_wrong_workload 00:05:33.718 ************************************ 00:05:33.718 09:38:50 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:33.718 09:38:50 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:33.718 09:38:50 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:33.718 09:38:50 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:33.718 09:38:50 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.718 09:38:50 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:33.718 09:38:50 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.718 09:38:50 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:33.718 09:38:50 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:33.718 09:38:50 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:33.718 09:38:50 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.718 09:38:50 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.718 09:38:50 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.718 09:38:50 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.718 09:38:50 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.718 09:38:50 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:33.718 09:38:50 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:05:33.718 Unsupported workload type: foobar 00:05:33.718 [2024-07-15 09:38:50.444864] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:33.718 accel_perf options: 00:05:33.718 [-h help message] 00:05:33.718 [-q queue depth per core] 00:05:33.718 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:33.718 [-T number of threads per core 00:05:33.718 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:33.718 [-t time in seconds] 00:05:33.718 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:33.718 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:33.718 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:33.718 [-l for compress/decompress workloads, name of uncompressed input file 00:05:33.718 [-S for crc32c workload, use this seed value (default 0) 00:05:33.718 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:33.718 [-f for fill workload, use this BYTE value (default 255) 00:05:33.718 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:33.718 [-y verify result if this switch is on] 00:05:33.718 [-a tasks to allocate per core (default: same value as -q)] 00:05:33.718 Can be used to spread operations across a wider range of memory. 00:05:33.718 09:38:50 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:33.718 09:38:50 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:33.718 09:38:50 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:33.718 09:38:50 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:33.718 00:05:33.718 real 0m0.022s 00:05:33.718 user 0m0.010s 00:05:33.718 sys 0m0.012s 00:05:33.718 09:38:50 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.718 09:38:50 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:33.718 ************************************ 00:05:33.718 END TEST accel_wrong_workload 00:05:33.718 ************************************ 00:05:33.718 09:38:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.718 09:38:50 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:33.718 09:38:50 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:33.718 09:38:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.718 09:38:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.718 Error: writing output failed: Broken pipe 00:05:33.718 ************************************ 00:05:33.718 START TEST accel_negative_buffers 00:05:33.718 ************************************ 00:05:33.718 09:38:50 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:33.718 09:38:50 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:33.718 09:38:50 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:33.718 09:38:50 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:33.718 09:38:50 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:05:33.718 09:38:50 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:33.718 09:38:50 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.718 09:38:50 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:33.718 09:38:50 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:33.718 09:38:50 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:33.718 09:38:50 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.718 09:38:50 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.718 09:38:50 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.718 09:38:50 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.718 09:38:50 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.718 09:38:50 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:33.718 09:38:50 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:33.976 -x option must be non-negative. 00:05:33.976 [2024-07-15 09:38:50.506122] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:33.976 accel_perf options: 00:05:33.976 [-h help message] 00:05:33.976 [-q queue depth per core] 00:05:33.976 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:33.977 [-T number of threads per core 00:05:33.977 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:33.977 [-t time in seconds] 00:05:33.977 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:33.977 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:33.977 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:33.977 [-l for compress/decompress workloads, name of uncompressed input file 00:05:33.977 [-S for crc32c workload, use this seed value (default 0) 00:05:33.977 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:33.977 [-f for fill workload, use this BYTE value (default 255) 00:05:33.977 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:33.977 [-y verify result if this switch is on] 00:05:33.977 [-a tasks to allocate per core (default: same value as -q)] 00:05:33.977 Can be used to spread operations across a wider range of memory. 
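Both option-parser failures above never reach an accel engine: foobar is rejected as a workload type and -x -1 as a source-buffer count, each exiting through spdk_app_parse_args with the usage text printed. Per that option list, minimal corrected invocations would be (illustrative):

accel_perf -t 1 -w copy -y        # 'copy' is a listed workload; 'foobar' is not
accel_perf -t 1 -w xor -y -x 2    # -x takes a non-negative count, minimum 2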
00:05:33.977 09:38:50 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:33.977 09:38:50 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:33.977 09:38:50 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:33.977 09:38:50 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:33.977 00:05:33.977 real 0m0.024s 00:05:33.977 user 0m0.014s 00:05:33.977 sys 0m0.010s 00:05:33.977 09:38:50 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.977 09:38:50 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:33.977 ************************************ 00:05:33.977 END TEST accel_negative_buffers 00:05:33.977 ************************************ 00:05:33.977 Error: writing output failed: Broken pipe 00:05:33.977 09:38:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.977 09:38:50 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:33.977 09:38:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:33.977 09:38:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.977 09:38:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.977 ************************************ 00:05:33.977 START TEST accel_crc32c 00:05:33.977 ************************************ 00:05:33.977 09:38:50 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:33.977 09:38:50 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:33.977 09:38:50 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:33.977 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.977 09:38:50 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:33.977 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.977 09:38:50 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:33.977 09:38:50 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:33.977 09:38:50 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.977 09:38:50 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.977 09:38:50 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.977 09:38:50 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.977 09:38:50 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.977 09:38:50 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:33.977 09:38:50 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:33.977 [2024-07-15 09:38:50.567954] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:33.977 [2024-07-15 09:38:50.568011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1776656 ] 00:05:33.977 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.977 [2024-07-15 09:38:50.602661] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
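accel_crc32c is the first positive accel test, and the wall of val= / case "$var" in entries that follows is accel.sh reading accel_perf's configuration output back one line at a time. Condensed, the readback is roughly this sketch (the case patterns are illustrative; the trace only shows the resulting assignments and the final checks):

while IFS=: read -r var val; do             # split each output line on ':'
    case "$var" in
        *opc*)    accel_opc=${val# }    ;;  # e.g. accel_opc=crc32c
        *module*) accel_module=${val# } ;;  # e.g. accel_module=software
    esac
done                                        # fed by accel_perf's stdout
[[ -n $accel_module ]]                      # the checks visible after the run:
[[ -n $accel_opc ]]                         # module and opcode were reported,
[[ $accel_module == software ]]             # and the module is software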
00:05:33.977 [2024-07-15 09:38:50.632736] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.977 [2024-07-15 09:38:50.725417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.245 09:38:50 accel.accel_crc32c 
-- accel/accel.sh@20 -- # val=32 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.245 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.246 09:38:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.177 
09:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:35.177 09:38:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.177 00:05:35.177 real 0m1.403s 00:05:35.177 user 0m1.265s 00:05:35.177 sys 0m0.140s 00:05:35.177 09:38:51 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.177 09:38:51 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:35.177 ************************************ 00:05:35.177 END TEST accel_crc32c 00:05:35.177 ************************************ 00:05:35.435 09:38:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:35.435 09:38:51 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:35.435 09:38:51 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:35.435 09:38:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.435 09:38:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.435 ************************************ 00:05:35.435 START TEST accel_crc32c_C2 00:05:35.435 ************************************ 00:05:35.435 09:38:51 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:35.435 09:38:51 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:35.435 09:38:51 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:35.435 09:38:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.435 09:38:51 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:35.435 09:38:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.435 09:38:51 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:35.435 09:38:51 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:35.435 09:38:51 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.435 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.435 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.435 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.435 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.435 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:35.435 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:35.435 [2024-07-15 09:38:52.015451] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:05:35.435 [2024-07-15 09:38:52.015514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1776922 ] 00:05:35.435 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.435 [2024-07-15 09:38:52.047765] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:35.435 [2024-07-15 09:38:52.078062] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.435 [2024-07-15 09:38:52.170381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.693 09:38:52 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.693 09:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.625 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.625 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.625 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.625 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.625 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.625 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.625 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.626 00:05:36.626 real 0m1.406s 00:05:36.626 user 0m1.258s 00:05:36.626 sys 0m0.149s 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.626 09:38:53 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:36.626 ************************************ 00:05:36.626 END TEST accel_crc32c_C2 00:05:36.626 ************************************ 00:05:36.884 09:38:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:36.884 09:38:53 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:36.884 09:38:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:36.884 09:38:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.884 09:38:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.884 ************************************ 00:05:36.884 START TEST accel_copy 00:05:36.884 ************************************ 00:05:36.884 09:38:53 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:36.884 09:38:53 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:36.884 09:38:53 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:36.884 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.884 09:38:53 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:36.884 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.884 09:38:53 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:36.884 09:38:53 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:36.884 09:38:53 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.884 09:38:53 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.884 09:38:53 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 
00:05:36.884 09:38:53 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.884 09:38:53 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.884 09:38:53 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:36.884 09:38:53 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:36.884 [2024-07-15 09:38:53.465415] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:36.884 [2024-07-15 09:38:53.465481] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777077 ] 00:05:36.884 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.884 [2024-07-15 09:38:53.498744] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:36.884 [2024-07-15 09:38:53.529131] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.884 [2024-07-15 09:38:53.624488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
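The wall of "val=" records above and below is a single xtrace'd loop in accel.sh: the script splits accel_perf's output on ":" with read -r var val and latches the opcode and module for the closing assertions. A hedged reconstruction of that loop (the variable names accel_opc and accel_module appear in the trace; the matched keys and the exact accel.sh source are assumptions):

    # Parse "key: value" lines from accel_perf; remember what ran and where.
    while IFS=: read -r var val; do
        case "$var" in
            *opcode*) accel_opc=${val//[[:space:]]/} ;;    # e.g. copy
            *module*) accel_module=${val//[[:space:]]/} ;; # e.g. software
        esac
    done < <(./build/examples/accel_perf -t 1 -w copy -y)
    # Closing checks, as in the log. xtrace escapes the pattern side of ==,
    # which is why the trace prints it as \s\o\f\t\w\a\r\e.
    [[ -n $accel_module ]] && [[ $accel_module == "software" ]]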
00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.142 09:38:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.073 09:38:54 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.073 09:38:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.331 09:38:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.331 09:38:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.331 09:38:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:38.331 09:38:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.331 00:05:38.331 real 0m1.410s 00:05:38.331 user 0m1.272s 00:05:38.331 sys 0m0.141s 00:05:38.331 09:38:54 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.331 09:38:54 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:38.331 ************************************ 00:05:38.331 END TEST accel_copy 00:05:38.331 ************************************ 00:05:38.331 09:38:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:38.331 09:38:54 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:38.331 09:38:54 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:38.331 09:38:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.331 09:38:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.331 ************************************ 00:05:38.331 START TEST accel_fill 00:05:38.331 ************************************ 00:05:38.331 09:38:54 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:38.331 09:38:54 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:38.331 09:38:54 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:38.331 09:38:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.331 09:38:54 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:38.331 09:38:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.331 09:38:54 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:38.331 09:38:54 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:38.331 09:38:54 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.331 09:38:54 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.331 09:38:54 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.331 09:38:54 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.331 09:38:54 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.331 09:38:54 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:38.331 09:38:54 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
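accel_fill adds three options on top of the common set: -f 128 is the fill byte (the trace below reports it as val=0x80), while -q 64 and -a 64 are read here as the queue depth and the preallocated task count -- both of those meanings are assumptions, not something the log itself confirms. The equivalent standalone invocation, copied from the run_test line above:

    # Sketch: the fill workload from this block, run by hand.
    ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y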
00:05:38.331 [2024-07-15 09:38:54.919102] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:38.331 [2024-07-15 09:38:54.919168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777241 ] 00:05:38.331 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.331 [2024-07-15 09:38:54.951753] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:38.331 [2024-07-15 09:38:54.982588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.332 [2024-07-15 09:38:55.073786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
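Each block repeats the same DPDK EAL preamble: one core (-c 0x1), a per-PID --file-prefix so concurrent tests do not collide, and a hugepage probe whose "No free 2048 kB hugepages reported on node 1" notice is presumably informational here, since the run proceeds with pages from node 0. A typical pre-flight check before launching these examples by hand, assuming SPDK's stock setup script:

    # Sketch, assuming the standard SPDK setup script is present.
    grep -i hugepages /proc/meminfo       # confirm free 2048 kB pages
    sudo HUGEMEM=2048 ./scripts/setup.sh  # reserve hugepages, bind devices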
00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.590 09:38:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.524 09:38:56 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:39.524 09:38:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.524 00:05:39.524 real 0m1.406s 00:05:39.524 user 0m1.256s 00:05:39.524 sys 0m0.153s 00:05:39.524 09:38:56 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.524 09:38:56 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:39.524 ************************************ 00:05:39.524 END TEST accel_fill 00:05:39.524 ************************************ 00:05:39.783 09:38:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:39.783 09:38:56 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:39.783 09:38:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:39.783 09:38:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.783 09:38:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.783 ************************************ 00:05:39.783 START TEST accel_copy_crc32c 00:05:39.783 ************************************ 00:05:39.783 09:38:56 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:39.783 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:39.783 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:39.783 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.783 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:39.783 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.783 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:39.783 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:39.783 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.783 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.783 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.783 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.783 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.783 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:39.783 
09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:39.783 [2024-07-15 09:38:56.376598] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:39.783 [2024-07-15 09:38:56.376660] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777395 ] 00:05:39.783 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.783 [2024-07-15 09:38:56.408132] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:39.783 [2024-07-15 09:38:56.440322] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.784 [2024-07-15 09:38:56.534189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
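copy_crc32c is the fused variant of the two preceding workloads: one operation copies the 4096-byte source and computes its CRC-32C, and the val=0 in the stream below is read here as the CRC seed (an assumption; the log only shows the bare value). Standalone, from the run_test line above:

    # Sketch: combined copy + CRC-32C in a single accel operation.
    ./build/examples/accel_perf -t 1 -w copy_crc32c -y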
00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.042 09:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.042 09:38:56 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.416 00:05:41.416 real 0m1.409s 00:05:41.416 user 0m1.263s 00:05:41.416 sys 0m0.148s 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.416 09:38:57 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:41.416 ************************************ 00:05:41.416 END TEST accel_copy_crc32c 00:05:41.416 ************************************ 00:05:41.416 09:38:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.416 09:38:57 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:41.416 09:38:57 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:41.416 09:38:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.416 09:38:57 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.416 ************************************ 00:05:41.416 START TEST accel_copy_crc32c_C2 00:05:41.416 ************************************ 00:05:41.416 09:38:57 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:41.416 09:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local 
accel_opc 00:05:41.416 09:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:41.416 09:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:41.416 09:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:41.416 09:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.416 09:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.416 09:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.416 09:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.416 09:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.416 09:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.416 09:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:41.416 09:38:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:41.416 [2024-07-15 09:38:57.831646] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:41.416 [2024-07-15 09:38:57.831707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777667 ] 00:05:41.416 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.416 [2024-07-15 09:38:57.863510] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
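The _C2 variants rerun a CRC workload with -C 2, which in accel_perf configures the io vector size for crc32c-type tests, so the source becomes two chained 4096-byte segments -- consistent with the '8192 bytes' value that shows up in the trace below (reading that value as the chained source total is an assumption). Standalone:

    # Sketch: same copy+CRC workload, but with a two-element io vector.
    ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2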
00:05:41.416 [2024-07-15 09:38:57.893349] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.416 [2024-07-15 09:38:57.985743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.416 09:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.790 00:05:42.790 real 0m1.401s 00:05:42.790 user 0m1.255s 00:05:42.790 sys 0m0.149s 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.790 09:38:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:42.790 ************************************ 00:05:42.790 END TEST accel_copy_crc32c_C2 00:05:42.790 ************************************ 00:05:42.790 09:38:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.790 09:38:59 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:42.790 09:38:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:42.790 09:38:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.790 09:38:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.790 ************************************ 00:05:42.790 START TEST accel_dualcast 00:05:42.790 ************************************ 00:05:42.790 09:38:59 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:42.790 [2024-07-15 09:38:59.275335] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:42.790 [2024-07-15 09:38:59.275399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777824 ] 00:05:42.790 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.790 [2024-07-15 09:38:59.308241] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:42.790 [2024-07-15 09:38:59.338218] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.790 [2024-07-15 09:38:59.430664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:42.790 09:38:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:42.791 09:38:59 accel.accel_dualcast -- 
accel/accel.sh@19 -- # IFS=: 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.791 09:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.167 09:39:00 accel.accel_dualcast -- 
accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:44.167 09:39:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.167 00:05:44.167 real 0m1.398s 00:05:44.167 user 0m1.250s 00:05:44.167 sys 0m0.150s 00:05:44.167 09:39:00 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.167 09:39:00 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:44.167 ************************************ 00:05:44.167 END TEST accel_dualcast 00:05:44.167 ************************************ 00:05:44.167 09:39:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.167 09:39:00 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:44.167 09:39:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:44.167 09:39:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.167 09:39:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.167 ************************************ 00:05:44.167 START TEST accel_compare 00:05:44.167 ************************************ 00:05:44.167 09:39:00 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:44.167 09:39:00 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:44.167 [2024-07-15 09:39:00.718356] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:44.167 [2024-07-15 09:39:00.718417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1778036 ] 00:05:44.167 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.167 [2024-07-15 09:39:00.749499] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:44.167 [2024-07-15 09:39:00.780996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.167 [2024-07-15 09:39:00.871564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@23 
-- # accel_opc=compare 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.167 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.168 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.168 09:39:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:44.168 09:39:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.168 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.168 09:39:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.597 09:39:02 
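The repeated `IFS=:` / `read -r var val` / `case "$var"` entries above are bash xtrace output from the accel.sh loop that parses accel_perf's printed configuration summary back into shell variables (accel_opc, accel_module). A minimal sketch of that loop, reconstructed from the trace rather than copied from the script, with the input variable name assumed:

```bash
# Hypothetical reconstruction of the accel.sh@19-23 parser seen in the trace;
# $perf_output (assumed name) holds accel_perf's "Key: value" summary lines.
while IFS=: read -r var val; do       # accel.sh@19: split each line on the first ':'
    val=${val# }                      # accel.sh@20: trim the space after ':' (assumption)
    case "$var" in                    # accel.sh@21: match on the key text
        *opcode*) accel_opc=$val ;;   # accel.sh@23: e.g. "compare", "xor"
        *module*) accel_module=$val ;; # accel.sh@22: e.g. "software"
    esac
done <<< "$perf_output"
```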
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:45.597 09:39:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.597 00:05:45.597 real 0m1.397s 00:05:45.597 user 0m1.258s 00:05:45.597 sys 0m0.142s 00:05:45.597 09:39:02 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.597 09:39:02 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:45.597 ************************************ 00:05:45.597 END TEST accel_compare 00:05:45.597 ************************************ 00:05:45.597 09:39:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.597 09:39:02 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:45.597 09:39:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:45.597 09:39:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.597 09:39:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.597 ************************************ 00:05:45.597 START TEST accel_xor 00:05:45.597 ************************************ 00:05:45.597 09:39:02 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:45.597 09:39:02 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:45.597 09:39:02 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:45.597 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.597 09:39:02 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:45.597 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.597 09:39:02 
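The three `accel.sh@27` tests traced at the top of this block are the pass/fail gate for each accel test, written out below. The test only passes if the parser captured a module name and an opcode, and the module that actually ran is the expected software fallback (no hardware offload is configured on this node):

```bash
[[ -n $accel_module ]]            # a module line was parsed from accel_perf's output
[[ -n $accel_opc ]]               # an opcode line was parsed
[[ $accel_module == software ]]   # xtrace escapes the pattern side as \s\o\f\t\w\a\r\e
```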
accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:45.597 09:39:02 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:45.597 09:39:02 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.597 09:39:02 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.597 09:39:02 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.597 09:39:02 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.597 09:39:02 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.597 09:39:02 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:45.597 09:39:02 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:45.597 [2024-07-15 09:39:02.166652] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:45.597 [2024-07-15 09:39:02.166714] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1778347 ] 00:05:45.597 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.597 [2024-07-15 09:39:02.198705] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:45.597 [2024-07-15 09:39:02.230725] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.597 [2024-07-15 09:39:02.323130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.856 09:39:02 
accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.856 09:39:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.790 09:39:03 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:46.790 09:39:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.790 00:05:46.790 real 0m1.415s 00:05:46.790 user 0m1.271s 00:05:46.790 sys 0m0.146s 00:05:46.790 09:39:03 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.790 09:39:03 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:46.790 ************************************ 00:05:46.790 END TEST accel_xor 00:05:46.790 ************************************ 00:05:47.048 09:39:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.048 09:39:03 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:47.049 09:39:03 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:47.049 09:39:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.049 09:39:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.049 ************************************ 00:05:47.049 START TEST accel_xor 00:05:47.049 ************************************ 00:05:47.049 09:39:03 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:47.049 09:39:03 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:47.049 09:39:03 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:47.049 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.049 09:39:03 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:47.049 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.049 09:39:03 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:47.049 09:39:03 accel.accel_xor 
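The second xor test differs from the first only by `-x 3`, visible in the command line above: three source buffers are XORed per operation instead of the default two (the first run parsed `val=2`; this one parses `val=3` below). A standalone invocation matching this log, with one assumption about how the harness wires the config descriptor:

```bash
# Flags copied from the command line in this log; feeding the JSON config on
# fd 62 via build_accel_config is an assumption about how -c /dev/fd/62 is fed.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w xor -y -x 3 62< <(build_accel_config)
```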
-- accel/accel.sh@12 -- # build_accel_config 00:05:47.049 09:39:03 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.049 09:39:03 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.049 09:39:03 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.049 09:39:03 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.049 09:39:03 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.049 09:39:03 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:47.049 09:39:03 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:47.049 [2024-07-15 09:39:03.625552] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:47.049 [2024-07-15 09:39:03.625615] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1778521 ] 00:05:47.049 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.049 [2024-07-15 09:39:03.658508] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:47.049 [2024-07-15 09:39:03.688633] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.049 [2024-07-15 09:39:03.781332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # 
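`build_accel_config` (accel.sh@31-41 in the trace) assembles the JSON that accel_perf reads from fd 62: an array of per-module config snippets, which stays empty in this run because all three `[[ 0 -gt 0 ]]` module checks fail. A rough sketch; the module names, counter variables, and JSON skeleton are guesses from the trace, not the actual script:

```bash
build_accel_config() {
    accel_json_cfg=()                     # accel.sh@31: start with no module config
    # accel.sh@32-34: three "[[ 0 -gt 0 ]]" checks -- presumably optional hardware
    # modules; the variable and method names below are placeholders, all disabled here
    [[ ${dsa_count:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
    [[ -n ${extra_cfg:-} ]] && accel_json_cfg+=("$extra_cfg")   # accel.sh@36
    local IFS=,                           # accel.sh@40: join the snippets with ','
    # accel.sh@41: normalize through jq before accel_perf consumes it on fd 62
    jq -r . <<JSON
{"subsystems": [{"subsystem": "accel", "config": [${accel_json_cfg[*]}]}]}
JSON
}
```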
IFS=: 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.307 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.308 09:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@20 -- # 
val= 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.242 09:39:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.243 09:39:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.243 09:39:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.243 09:39:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.243 09:39:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.243 09:39:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:48.243 09:39:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.243 00:05:48.243 real 0m1.411s 00:05:48.243 user 0m1.274s 00:05:48.243 sys 0m0.139s 00:05:48.243 09:39:05 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.243 09:39:05 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:48.243 ************************************ 00:05:48.243 END TEST accel_xor 00:05:48.243 ************************************ 00:05:48.501 09:39:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.501 09:39:05 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:48.501 09:39:05 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:48.501 09:39:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.501 09:39:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.501 ************************************ 00:05:48.501 START TEST accel_dif_verify 00:05:48.501 ************************************ 00:05:48.501 09:39:05 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:48.501 09:39:05 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:48.501 09:39:05 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:48.501 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.501 09:39:05 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:48.502 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.502 09:39:05 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:48.502 09:39:05 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:48.502 09:39:05 accel.accel_dif_verify -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:05:48.502 09:39:05 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.502 09:39:05 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.502 09:39:05 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.502 09:39:05 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.502 09:39:05 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:48.502 09:39:05 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:48.502 [2024-07-15 09:39:05.079443] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:48.502 [2024-07-15 09:39:05.079514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1778696 ] 00:05:48.502 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.502 [2024-07-15 09:39:05.111555] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:48.502 [2024-07-15 09:39:05.141370] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.502 [2024-07-15 09:39:05.232925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.502 09:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:48.502 09:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.502 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.502 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 
00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.760 09:39:05 accel.accel_dif_verify -- 
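Compared with the compare/xor runs, the dif_verify parse above picks up two extra sizes, `'512 bytes'` and `'8 bytes'`, alongside the two 4096-byte buffer entries. Eight bytes matches one T10 DIF tag (2-byte guard CRC + 2-byte application tag + 4-byte reference tag), presumably per 512-byte block; the exact mapping of the parsed fields is inferred from the trace, not from accel_perf's source. The invocation, as in this log:

```bash
# dif_verify run exactly as the harness does above; build_accel_config on
# fd 62 is the same assumption as in the earlier sketch.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w dif_verify 62< <(build_accel_config)
```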
accel/accel.sh@19 -- # IFS=: 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.760 09:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:49.695 09:39:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.695 00:05:49.695 real 0m1.391s 00:05:49.695 user 0m1.256s 00:05:49.695 sys 0m0.138s 00:05:49.695 09:39:06 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.695 09:39:06 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:49.695 ************************************ 00:05:49.695 END TEST accel_dif_verify 00:05:49.695 ************************************ 00:05:49.954 09:39:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.954 09:39:06 accel -- 
accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:49.954 09:39:06 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:49.954 09:39:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.954 09:39:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.954 ************************************ 00:05:49.954 START TEST accel_dif_generate 00:05:49.954 ************************************ 00:05:49.954 09:39:06 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:49.954 09:39:06 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:49.954 09:39:06 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:49.954 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.954 09:39:06 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:49.954 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.954 09:39:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:49.954 09:39:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:49.954 09:39:06 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.954 09:39:06 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.954 09:39:06 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.954 09:39:06 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.954 09:39:06 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.954 09:39:06 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:49.954 09:39:06 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:49.954 [2024-07-15 09:39:06.522422] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:49.954 [2024-07-15 09:39:06.522489] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1778854 ] 00:05:49.954 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.954 [2024-07-15 09:39:06.555589] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:49.954 [2024-07-15 09:39:06.586186] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.954 [2024-07-15 09:39:06.678143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.212 
09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:50.212 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.213 09:39:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.147 09:39:07 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:51.147 09:39:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.147 00:05:51.147 real 0m1.396s 00:05:51.147 user 0m1.253s 00:05:51.147 sys 0m0.146s 00:05:51.147 09:39:07 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.147 09:39:07 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:51.147 ************************************ 00:05:51.147 END TEST accel_dif_generate 00:05:51.147 ************************************ 00:05:51.147 09:39:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.147 09:39:07 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:51.147 09:39:07 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:51.147 09:39:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.147 09:39:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.406 ************************************ 00:05:51.406 START TEST accel_dif_generate_copy 00:05:51.406 ************************************ 00:05:51.406 09:39:07 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:51.406 09:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:51.406 09:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:51.406 09:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.406 09:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
dif_generate_copy 00:05:51.406 09:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.406 09:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:51.407 09:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:51.407 09:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.407 09:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.407 09:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.407 09:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.407 09:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.407 09:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:51.407 09:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:51.407 [2024-07-15 09:39:07.958584] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:51.407 [2024-07-15 09:39:07.958651] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1779375 ] 00:05:51.407 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.407 [2024-07-15 09:39:07.990565] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:51.407 [2024-07-15 09:39:08.020713] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.407 [2024-07-15 09:39:08.112417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 09:39:08 
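dif_generate, which just finished, computes and inserts DIF tags into a buffer; dif_generate_copy, starting here, performs the data copy and the tag generation as a single operation, which is why it is exercised as a separate opcode. Invocation per the command line above, under the same fd-62 assumption:

```bash
# Final opcode in this group; flags copied from the log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w dif_generate_copy 62< <(build_accel_config)
```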
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 09:39:08 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.407 09:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:05:52.778 00:05:52.778 real 0m1.390s 00:05:52.778 user 0m1.250s 00:05:52.778 sys 0m0.141s 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.778 09:39:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:52.778 ************************************ 00:05:52.778 END TEST accel_dif_generate_copy 00:05:52.778 ************************************ 00:05:52.778 09:39:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.778 09:39:09 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:52.778 09:39:09 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:52.778 09:39:09 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:52.778 09:39:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.778 09:39:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.778 ************************************ 00:05:52.778 START TEST accel_comp 00:05:52.778 ************************************ 00:05:52.778 09:39:09 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:52.778 09:39:09 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:52.778 09:39:09 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:52.778 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.778 09:39:09 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:52.778 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.779 09:39:09 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:52.779 09:39:09 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:52.779 09:39:09 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.779 09:39:09 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.779 09:39:09 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.779 09:39:09 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.779 09:39:09 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.779 09:39:09 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:52.779 09:39:09 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:52.779 [2024-07-15 09:39:09.392597] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:52.779 [2024-07-15 09:39:09.392661] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1779781 ] 00:05:52.779 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.779 [2024-07-15 09:39:09.425605] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
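For reference, each of these accel tests drives SPDK's accel_perf example directly; the traced command above ran the dif_generate_copy workload for one second. A minimal hand-run sketch of the same invocation (assuming this log's workspace path; the harness also pipes a JSON accel config over /dev/fd/62, but per the build_accel_config trace above that config is empty in this run, so it is omitted here — the $SPDK variable is only a convenience for the sketch):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk  # workspace path as used in this run
# -t 1: run the workload for 1 second; -w: opcode under test, both verbatim from the trace
$SPDK/build/examples/accel_perf -t 1 -w dif_generate_copy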
00:05:52.779 [2024-07-15 09:39:09.456462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.779 [2024-07-15 09:39:09.546372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.036 09:39:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.037 09:39:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.406 09:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.406 09:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.406 09:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.406 09:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:54.407 09:39:10 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.407 00:05:54.407 real 0m1.408s 00:05:54.407 user 0m1.274s 00:05:54.407 sys 0m0.137s 00:05:54.407 09:39:10 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.407 09:39:10 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:54.407 ************************************ 00:05:54.407 END TEST accel_comp 00:05:54.407 ************************************ 00:05:54.407 09:39:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.407 09:39:10 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:54.407 09:39:10 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:54.407 09:39:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.407 09:39:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.407 ************************************ 00:05:54.407 START TEST accel_decomp 00:05:54.407 ************************************ 00:05:54.407 09:39:10 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:54.407 09:39:10 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:54.407 09:39:10 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:54.407 09:39:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:10 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:54.407 09:39:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:10 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:54.407 09:39:10 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:54.407 09:39:10 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.407 09:39:10 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.407 09:39:10 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.407 09:39:10 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.407 09:39:10 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.407 09:39:10 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:54.407 09:39:10 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 
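The accel_comp pass that just finished exercises the compress opcode against the repo's bib test file; a comparable standalone sketch, reusing the exact flags from the traced command line (minus the empty fd-based config, under the same assumptions as the sketch above):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -w compress and -l <input file>, both verbatim from the accel_perf command traced above
$SPDK/build/examples/accel_perf -t 1 -w compress -l $SPDK/test/accel/bib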
00:05:54.407 [2024-07-15 09:39:10.843334] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:54.407 [2024-07-15 09:39:10.843396] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1779945 ] 00:05:54.407 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.407 [2024-07-15 09:39:10.875645] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:54.407 [2024-07-15 09:39:10.905556] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.407 [2024-07-15 09:39:10.997683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 
00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.407 09:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.779 09:39:12 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:55.779 09:39:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.779 00:05:55.779 real 0m1.401s 00:05:55.779 user 0m1.264s 00:05:55.779 sys 0m0.140s 00:05:55.779 09:39:12 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.779 09:39:12 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:55.779 ************************************ 00:05:55.779 END TEST accel_decomp 00:05:55.779 ************************************ 00:05:55.779 09:39:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.779 09:39:12 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:55.779 09:39:12 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:55.779 09:39:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.779 09:39:12 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.779 ************************************ 00:05:55.779 START TEST accel_decomp_full 00:05:55.779 ************************************ 00:05:55.779 09:39:12 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:55.779 09:39:12 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:55.779 09:39:12 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:55.779 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.779 09:39:12 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:55.779 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # 
read -r var val 00:05:55.779 09:39:12 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:55.779 09:39:12 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:55.779 09:39:12 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.779 09:39:12 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.779 09:39:12 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.779 09:39:12 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.779 09:39:12 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.779 09:39:12 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:55.779 09:39:12 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:55.779 [2024-07-15 09:39:12.289173] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:55.779 [2024-07-15 09:39:12.289240] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1780195 ] 00:05:55.779 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.780 [2024-07-15 09:39:12.321367] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:55.780 [2024-07-15 09:39:12.351408] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.780 [2024-07-15 09:39:12.444283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.780 09:39:12 accel.accel_decomp_full -- 
accel/accel.sh@20 -- # val= 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 
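The decompress variants append -y to the same command line; a hand-run equivalent of the accel_decomp invocation traced above, under the same assumptions as the earlier sketches, would be roughly:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# flags copied verbatim from the traced accel_perf command for the decompress workload
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y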
00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.780 09:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:57.153 09:39:13 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.153 00:05:57.153 real 0m1.428s 00:05:57.153 user 0m1.278s 00:05:57.153 sys 0m0.153s 00:05:57.153 09:39:13 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.153 09:39:13 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 
00:05:57.153 ************************************ 00:05:57.153 END TEST accel_decomp_full 00:05:57.153 ************************************ 00:05:57.153 09:39:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:57.153 09:39:13 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:57.153 09:39:13 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:57.153 09:39:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.153 09:39:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.153 ************************************ 00:05:57.153 START TEST accel_decomp_mcore 00:05:57.153 ************************************ 00:05:57.153 09:39:13 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:57.153 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:57.153 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:57.153 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.153 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:57.153 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.154 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:57.154 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:57.154 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.154 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.154 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.154 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.154 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.154 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:57.154 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:57.154 [2024-07-15 09:39:13.759488] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:57.154 [2024-07-15 09:39:13.759547] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1780374 ] 00:05:57.154 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.154 [2024-07-15 09:39:13.791526] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
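accel_decomp_full, which just completed above, repeats the decompress run with -o 0 added, and the val='111250 bytes' entries in its trace show the transfer size growing from the 4096-byte default to the whole test file. Sketched standalone, same assumptions as the sketches above:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -o 0 as traced; the trace records 111250-byte (full-file) operations for this variant
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0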
00:05:57.154 [2024-07-15 09:39:13.820935] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.154 [2024-07-15 09:39:13.916952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.154 [2024-07-15 09:39:13.917023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.154 [2024-07-15 09:39:13.917122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.154 [2024-07-15 09:39:13.917125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # IFS=: 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.413 09:39:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.787 
09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.787 00:05:58.787 real 0m1.418s 00:05:58.787 user 0m4.735s 00:05:58.787 sys 0m0.142s 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.787 09:39:15 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:58.787 ************************************ 00:05:58.787 END TEST accel_decomp_mcore 00:05:58.787 ************************************ 00:05:58.787 09:39:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.787 
09:39:15 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:58.787 09:39:15 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:58.787 09:39:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.787 09:39:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.787 ************************************ 00:05:58.787 START TEST accel_decomp_full_mcore 00:05:58.787 ************************************ 00:05:58.787 09:39:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:58.787 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:58.787 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:58.787 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.787 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:58.787 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.787 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:58.787 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:58.787 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.787 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.787 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.787 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.787 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.787 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:58.787 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:58.787 [2024-07-15 09:39:15.223920] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:58.787 [2024-07-15 09:39:15.223979] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1780536 ] 00:05:58.787 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.787 [2024-07-15 09:39:15.256386] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
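accel_decomp_mcore, whose END banner appears above, is the same decompress workload with the core mask widened to -m 0xf: its log reports four available cores, reactors starting on cores 0-3, and correspondingly about 4.7s of user CPU time inside 1.4s of wall time. Standalone sketch, same assumptions as above:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -m 0xf: core mask covering cores 0-3, matching the four reactor start-up notices in the trace
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf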
00:05:58.787 [2024-07-15 09:39:15.287951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.787 [2024-07-15 09:39:15.384312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.787 [2024-07-15 09:39:15.384384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.788 [2024-07-15 09:39:15.384485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.788 [2024-07-15 09:39:15.384488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:58.788 09:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.161 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.162 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.162 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.162 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.162 09:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:00.162 09:39:16 accel.accel_decomp_full_mcore 
-- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.162 00:06:00.162 real 0m1.424s 00:06:00.162 user 0m4.748s 00:06:00.162 sys 0m0.153s 00:06:00.162 09:39:16 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.162 09:39:16 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:00.162 ************************************ 00:06:00.162 END TEST accel_decomp_full_mcore 00:06:00.162 ************************************ 00:06:00.162 09:39:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.162 09:39:16 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:00.162 09:39:16 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:00.162 09:39:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.162 09:39:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.162 ************************************ 00:06:00.162 START TEST accel_decomp_mthread 00:06:00.162 ************************************ 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:00.162 [2024-07-15 09:39:16.691050] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:00.162 [2024-07-15 09:39:16.691104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1780694 ] 00:06:00.162 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.162 [2024-07-15 09:39:16.723655] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
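The accel_decomp_mthread case beginning here swaps the core mask for threading: a single core (-c 0x1 in the EAL arguments) running two worker threads via -T 2, against the default 4096-byte buffer since -o 0 is absent. A sketch of the equivalent standalone run, same SPDK shorthand as above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -T 2: two worker threads on the one allowed core
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -T 2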
00:06:00.162 [2024-07-15 09:39:16.754642] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.162 [2024-07-15 09:39:16.850560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.162 09:39:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 
00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.560 00:06:01.560 real 0m1.401s 00:06:01.560 user 0m1.265s 00:06:01.560 sys 0m0.139s 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.560 09:39:18 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:01.560 ************************************ 00:06:01.560 END TEST accel_decomp_mthread 00:06:01.560 ************************************ 00:06:01.560 09:39:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.560 09:39:18 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:01.560 09:39:18 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:01.560 09:39:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.560 09:39:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.561 ************************************ 00:06:01.561 START TEST accel_decomp_full_mthread 00:06:01.561 ************************************ 00:06:01.561 09:39:18 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:01.561 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:01.561 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:01.561 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.561 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:01.561 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.561 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:01.561 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:01.561 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.561 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.561 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.561 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.561 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.561 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:01.561 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:01.561 [2024-07-15 09:39:18.142744] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:01.561 [2024-07-15 09:39:18.142814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1780973 ] 00:06:01.561 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.561 [2024-07-15 09:39:18.173856] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
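Note how each run passes its accel configuration as '-c /dev/fd/62': build_accel_config assembles JSON and the harness feeds it over file descriptor 62 instead of a temporary file. A sketch of that plumbing, with an empty '{}' config standing in (an assumption) for whatever build_accel_config actually emits:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Supply the JSON config on fd 62 via process substitution; accel_perf
    # just opens /dev/fd/62 like any ordinary file.
    "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -o 0 -T 2 62< <(echo '{}')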
00:06:01.561 [2024-07-15 09:39:18.207512] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.561 [2024-07-15 09:39:18.299999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.852 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.853 09:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.228 00:06:03.228 real 0m1.453s 00:06:03.228 user 0m1.305s 00:06:03.228 sys 0m0.151s 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.228 09:39:19 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:03.228 ************************************ 00:06:03.228 END TEST accel_decomp_full_mthread 00:06:03.228 ************************************ 00:06:03.228 09:39:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.228 09:39:19 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:03.228 09:39:19 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 
00:06:03.228 09:39:19 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:03.228 09:39:19 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:03.228 09:39:19 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.228 09:39:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.228 09:39:19 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.228 09:39:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.228 09:39:19 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.228 09:39:19 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.228 09:39:19 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.228 09:39:19 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:03.228 09:39:19 accel -- accel/accel.sh@41 -- # jq -r . 00:06:03.228 ************************************ 00:06:03.228 START TEST accel_dif_functional_tests 00:06:03.228 ************************************ 00:06:03.228 09:39:19 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:03.228 [2024-07-15 09:39:19.661891] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:03.228 [2024-07-15 09:39:19.661972] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1781128 ] 00:06:03.228 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.228 [2024-07-15 09:39:19.693316] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:03.228 [2024-07-15 09:39:19.723321] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.228 [2024-07-15 09:39:19.817913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.228 [2024-07-15 09:39:19.817984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.228 [2024-07-15 09:39:19.817988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.228 00:06:03.228 00:06:03.228 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.228 http://cunit.sourceforge.net/ 00:06:03.228 00:06:03.228 00:06:03.228 Suite: accel_dif 00:06:03.228 Test: verify: DIF generated, GUARD check ...passed 00:06:03.228 Test: verify: DIF generated, APPTAG check ...passed 00:06:03.228 Test: verify: DIF generated, REFTAG check ...passed 00:06:03.228 Test: verify: DIF not generated, GUARD check ...[2024-07-15 09:39:19.910483] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:03.228 passed 00:06:03.228 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 09:39:19.910564] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:03.228 passed 00:06:03.228 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 09:39:19.910595] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:03.228 passed 00:06:03.228 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:03.228 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 09:39:19.910652] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:03.228 passed 00:06:03.228 Test: verify: APPTAG incorrect, no APPTAG check ...passed 
00:06:03.228 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:03.228 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:03.228 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 09:39:19.910788] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:03.228 passed 00:06:03.228 Test: verify copy: DIF generated, GUARD check ...passed 00:06:03.228 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:03.228 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:03.228 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 09:39:19.910950] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:03.228 passed 00:06:03.228 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 09:39:19.910986] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:03.228 passed 00:06:03.228 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 09:39:19.911019] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:03.228 passed 00:06:03.228 Test: generate copy: DIF generated, GUARD check ...passed 00:06:03.228 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:03.228 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:03.229 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:03.229 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:03.229 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:03.229 Test: generate copy: iovecs-len validate ...[2024-07-15 09:39:19.911230] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:03.229 passed 00:06:03.229 Test: generate copy: buffer alignment validate ...passed 00:06:03.229 00:06:03.229 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.229 suites 1 1 n/a 0 0 00:06:03.229 tests 26 26 26 0 0 00:06:03.229 asserts 115 115 115 0 n/a 00:06:03.229 00:06:03.229 Elapsed time = 0.002 seconds 00:06:03.486 00:06:03.486 real 0m0.499s 00:06:03.486 user 0m0.775s 00:06:03.486 sys 0m0.182s 00:06:03.486 09:39:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.486 09:39:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:03.487 ************************************ 00:06:03.487 END TEST accel_dif_functional_tests 00:06:03.487 ************************************ 00:06:03.487 09:39:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.487 00:06:03.487 real 0m31.684s 00:06:03.487 user 0m35.143s 00:06:03.487 sys 0m4.557s 00:06:03.487 09:39:20 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.487 09:39:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.487 ************************************ 00:06:03.487 END TEST accel 00:06:03.487 ************************************ 00:06:03.487 09:39:20 -- common/autotest_common.sh@1142 -- # return 0 00:06:03.487 09:39:20 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:03.487 09:39:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.487 09:39:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.487 09:39:20 -- common/autotest_common.sh@10 -- # set +x 00:06:03.487 ************************************ 00:06:03.487 START TEST accel_rpc 00:06:03.487 ************************************ 00:06:03.487 09:39:20 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:03.487 * Looking for test storage... 00:06:03.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:03.487 09:39:20 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:03.487 09:39:20 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1781317 00:06:03.487 09:39:20 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:03.487 09:39:20 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1781317 00:06:03.487 09:39:20 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1781317 ']' 00:06:03.487 09:39:20 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.487 09:39:20 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.487 09:39:20 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.487 09:39:20 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.487 09:39:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.745 [2024-07-15 09:39:20.290430] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
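The accel_rpc suite now starting boots spdk_tgt with --wait-for-rpc so that opcode assignments can be changed over JSON-RPC before the framework initializes. A sketch of the same sequence by hand; the sleep is a crude stand-in (an assumption) for the waitforlisten helper the harness actually uses:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc & tgt_pid=$!
    sleep 1                                       # stand-in for waitforlisten
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software    # pin 'copy' to the software module
    "$SPDK/scripts/rpc.py" framework_start_init                    # now let the framework come up
    "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy # prints: software
    kill "$tgt_pid"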
00:06:03.745 [2024-07-15 09:39:20.290515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1781317 ] 00:06:03.745 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.745 [2024-07-15 09:39:20.322211] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:03.745 [2024-07-15 09:39:20.349720] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.745 [2024-07-15 09:39:20.433681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.745 09:39:20 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.745 09:39:20 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:03.745 09:39:20 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:03.745 09:39:20 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:03.745 09:39:20 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:03.745 09:39:20 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:03.745 09:39:20 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:03.745 09:39:20 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.745 09:39:20 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.745 09:39:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.745 ************************************ 00:06:03.745 START TEST accel_assign_opcode 00:06:03.745 ************************************ 00:06:03.745 09:39:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:03.745 09:39:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:03.745 09:39:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.745 09:39:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:03.745 [2024-07-15 09:39:20.526379] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:04.004 09:39:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.004 09:39:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:04.004 09:39:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.004 09:39:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:04.004 [2024-07-15 09:39:20.534392] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:04.004 09:39:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.004 09:39:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:04.004 09:39:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.004 09:39:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:04.004 09:39:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.004 09:39:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:04.004 
09:39:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.004 09:39:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:04.004 09:39:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:04.004 09:39:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:04.263 09:39:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.263 software 00:06:04.263 00:06:04.263 real 0m0.297s 00:06:04.263 user 0m0.042s 00:06:04.263 sys 0m0.004s 00:06:04.263 09:39:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.263 09:39:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:04.263 ************************************ 00:06:04.263 END TEST accel_assign_opcode 00:06:04.263 ************************************ 00:06:04.263 09:39:20 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:04.263 09:39:20 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1781317 00:06:04.263 09:39:20 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1781317 ']' 00:06:04.263 09:39:20 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1781317 00:06:04.263 09:39:20 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:04.263 09:39:20 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.263 09:39:20 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1781317 00:06:04.263 09:39:20 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.263 09:39:20 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.263 09:39:20 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1781317' 00:06:04.263 killing process with pid 1781317 00:06:04.263 09:39:20 accel_rpc -- common/autotest_common.sh@967 -- # kill 1781317 00:06:04.263 09:39:20 accel_rpc -- common/autotest_common.sh@972 -- # wait 1781317 00:06:04.522 00:06:04.522 real 0m1.090s 00:06:04.522 user 0m1.023s 00:06:04.522 sys 0m0.420s 00:06:04.522 09:39:21 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.522 09:39:21 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.522 ************************************ 00:06:04.522 END TEST accel_rpc 00:06:04.522 ************************************ 00:06:04.522 09:39:21 -- common/autotest_common.sh@1142 -- # return 0 00:06:04.522 09:39:21 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:04.522 09:39:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.522 09:39:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.522 09:39:21 -- common/autotest_common.sh@10 -- # set +x 00:06:04.779 ************************************ 00:06:04.779 START TEST app_cmdline 00:06:04.779 ************************************ 00:06:04.779 09:39:21 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:04.779 * Looking for test storage... 
00:06:04.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:04.779 09:39:21 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:04.779 09:39:21 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1781521 00:06:04.779 09:39:21 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:04.779 09:39:21 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1781521 00:06:04.779 09:39:21 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1781521 ']' 00:06:04.779 09:39:21 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.779 09:39:21 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.779 09:39:21 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.779 09:39:21 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.779 09:39:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:04.779 [2024-07-15 09:39:21.426140] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:04.779 [2024-07-15 09:39:21.426239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1781521 ] 00:06:04.779 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.779 [2024-07-15 09:39:21.457741] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
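app_cmdline starts the target with an RPC allowlist of exactly two methods and then checks that everything else is rejected; the 'Method not found' (-32601) response a little further down is the expected outcome, not a failure. A sketch of that check, with the same sleep caveat as above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods & tgt_pid=$!
    sleep 1
    "$SPDK/scripts/rpc.py" spdk_get_version | jq -r .version  # "SPDK v24.09-pre git sha1 719d03c6a"
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats || true     # outside the allowlist: fails with -32601
    kill "$tgt_pid"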
00:06:04.779 [2024-07-15 09:39:21.485017] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.037 [2024-07-15 09:39:21.571933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.295 09:39:21 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.295 09:39:21 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:05.295 09:39:21 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:05.295 { 00:06:05.295 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:06:05.295 "fields": { 00:06:05.295 "major": 24, 00:06:05.295 "minor": 9, 00:06:05.295 "patch": 0, 00:06:05.295 "suffix": "-pre", 00:06:05.295 "commit": "719d03c6a" 00:06:05.295 } 00:06:05.295 } 00:06:05.295 09:39:22 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:05.295 09:39:22 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:05.295 09:39:22 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:05.295 09:39:22 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:05.295 09:39:22 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:05.295 09:39:22 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:05.295 09:39:22 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.295 09:39:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:05.295 09:39:22 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:05.295 09:39:22 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.553 09:39:22 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:05.553 09:39:22 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:05.553 09:39:22 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.553 09:39:22 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:05.553 09:39:22 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.553 09:39:22 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:05.553 09:39:22 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.553 09:39:22 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:05.553 09:39:22 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.553 09:39:22 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:05.553 09:39:22 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.553 09:39:22 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:05.553 09:39:22 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:05.553 09:39:22 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.553 request: 00:06:05.553 { 00:06:05.553 "method": 
"env_dpdk_get_mem_stats", 00:06:05.553 "req_id": 1 00:06:05.553 } 00:06:05.553 Got JSON-RPC error response 00:06:05.553 response: 00:06:05.553 { 00:06:05.553 "code": -32601, 00:06:05.553 "message": "Method not found" 00:06:05.553 } 00:06:05.811 09:39:22 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:05.811 09:39:22 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:05.811 09:39:22 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:05.811 09:39:22 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:05.811 09:39:22 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1781521 00:06:05.811 09:39:22 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1781521 ']' 00:06:05.811 09:39:22 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1781521 00:06:05.811 09:39:22 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:05.811 09:39:22 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.811 09:39:22 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1781521 00:06:05.811 09:39:22 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.811 09:39:22 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.811 09:39:22 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1781521' 00:06:05.811 killing process with pid 1781521 00:06:05.811 09:39:22 app_cmdline -- common/autotest_common.sh@967 -- # kill 1781521 00:06:05.811 09:39:22 app_cmdline -- common/autotest_common.sh@972 -- # wait 1781521 00:06:06.069 00:06:06.069 real 0m1.441s 00:06:06.069 user 0m1.759s 00:06:06.069 sys 0m0.461s 00:06:06.069 09:39:22 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.069 09:39:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.069 ************************************ 00:06:06.069 END TEST app_cmdline 00:06:06.069 ************************************ 00:06:06.069 09:39:22 -- common/autotest_common.sh@1142 -- # return 0 00:06:06.069 09:39:22 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:06.069 09:39:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.069 09:39:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.069 09:39:22 -- common/autotest_common.sh@10 -- # set +x 00:06:06.069 ************************************ 00:06:06.069 START TEST version 00:06:06.069 ************************************ 00:06:06.069 09:39:22 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:06.069 * Looking for test storage... 
00:06:06.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:06.328 09:39:22 version -- app/version.sh@17 -- # get_header_version major 00:06:06.328 09:39:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:06.328 09:39:22 version -- app/version.sh@14 -- # cut -f2 00:06:06.328 09:39:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.328 09:39:22 version -- app/version.sh@17 -- # major=24 00:06:06.328 09:39:22 version -- app/version.sh@18 -- # get_header_version minor 00:06:06.328 09:39:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:06.328 09:39:22 version -- app/version.sh@14 -- # cut -f2 00:06:06.328 09:39:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.328 09:39:22 version -- app/version.sh@18 -- # minor=9 00:06:06.328 09:39:22 version -- app/version.sh@19 -- # get_header_version patch 00:06:06.328 09:39:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:06.328 09:39:22 version -- app/version.sh@14 -- # cut -f2 00:06:06.328 09:39:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.328 09:39:22 version -- app/version.sh@19 -- # patch=0 00:06:06.328 09:39:22 version -- app/version.sh@20 -- # get_header_version suffix 00:06:06.328 09:39:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:06.328 09:39:22 version -- app/version.sh@14 -- # cut -f2 00:06:06.328 09:39:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.328 09:39:22 version -- app/version.sh@20 -- # suffix=-pre 00:06:06.328 09:39:22 version -- app/version.sh@22 -- # version=24.9 00:06:06.328 09:39:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:06.328 09:39:22 version -- app/version.sh@28 -- # version=24.9rc0 00:06:06.328 09:39:22 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:06.328 09:39:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:06.328 09:39:22 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:06.328 09:39:22 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:06.328 00:06:06.328 real 0m0.107s 00:06:06.328 user 0m0.047s 00:06:06.328 sys 0m0.082s 00:06:06.328 09:39:22 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.328 09:39:22 version -- common/autotest_common.sh@10 -- # set +x 00:06:06.328 ************************************ 00:06:06.328 END TEST version 00:06:06.328 ************************************ 00:06:06.328 09:39:22 -- common/autotest_common.sh@1142 -- # return 0 00:06:06.328 09:39:22 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:06.328 09:39:22 -- spdk/autotest.sh@198 -- # uname -s 00:06:06.328 09:39:22 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:06.328 09:39:22 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:06.328 09:39:22 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 
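The version test above re-derives the version string from the SPDK_VERSION_* macros in include/spdk/version.h and checks it against the Python package's spdk.__version__. A sketch of the same derivation, assuming it runs from an SPDK checkout and that the rc0 rendering mirrors app/version.sh's suffix handling:

    hdr=include/spdk/version.h
    get() { grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
    major=$(get MAJOR) minor=$(get MINOR) patch=$(get PATCH) suffix=$(get SUFFIX)
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    [[ $suffix == -pre ]] && version=${version}rc0   # -pre is rendered as rc0 (24.9rc0 above)
    echo "$version"
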
00:06:06.328 09:39:22 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:06.329 09:39:22 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:06.329 09:39:22 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:06.329 09:39:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:06.329 09:39:22 -- common/autotest_common.sh@10 -- # set +x 00:06:06.329 09:39:22 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:06.329 09:39:22 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:06.329 09:39:22 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:06.329 09:39:22 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:06.329 09:39:22 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:06.329 09:39:22 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:06.329 09:39:22 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:06.329 09:39:22 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:06.329 09:39:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.329 09:39:22 -- common/autotest_common.sh@10 -- # set +x 00:06:06.329 ************************************ 00:06:06.329 START TEST nvmf_tcp 00:06:06.329 ************************************ 00:06:06.329 09:39:22 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:06.329 * Looking for test storage... 00:06:06.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:06.329 09:39:23 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.329 09:39:23 
nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.329 09:39:23 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.329 09:39:23 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.329 09:39:23 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.329 09:39:23 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.329 09:39:23 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:06.329 09:39:23 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:06.329 09:39:23 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.329 09:39:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:06.329 09:39:23 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:06.329 09:39:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:06.329 09:39:23 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.329 09:39:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.329 ************************************ 00:06:06.329 START TEST nvmf_example 00:06:06.329 ************************************ 00:06:06.329 09:39:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:06.588 * Looking for test storage... 00:06:06.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:06.589 09:39:23 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:06.589 09:39:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:08.491 09:39:25 
nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:08.491 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:08.491 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:08.491 
Found net devices under 0000:0a:00.0: cvl_0_0 00:06:08.491 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:08.492 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:08.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:08.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:06:08.492 00:06:08.492 --- 10.0.0.2 ping statistics --- 00:06:08.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.492 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:08.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:08.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:06:08.492 00:06:08.492 --- 10.0.0.1 ping statistics --- 00:06:08.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.492 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:08.492 09:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:08.750 09:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:08.750 09:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:08.750 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:08.750 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.750 09:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:08.750 09:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:08.750 09:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1783422 00:06:08.751 09:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:08.751 09:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:08.751 09:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1783422 00:06:08.751 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1783422 ']' 00:06:08.751 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.751 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.751 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:08.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.751 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.751 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.751 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:09.009 09:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:09.009 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.202 Initializing NVMe Controllers 00:06:21.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:21.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:21.202 Initialization complete. Launching workers. 00:06:21.202 ======================================================== 00:06:21.202 Latency(us) 00:06:21.202 Device Information : IOPS MiB/s Average min max 00:06:21.202 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13812.90 53.96 4633.24 894.85 16256.32 00:06:21.202 ======================================================== 00:06:21.202 Total : 13812.90 53.96 4633.24 894.85 16256.32 00:06:21.202 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:21.202 rmmod nvme_tcp 00:06:21.202 rmmod nvme_fabrics 00:06:21.202 rmmod nvme_keyring 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1783422 ']' 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1783422 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1783422 ']' 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1783422 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1783422 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1783422' 00:06:21.202 killing process with pid 1783422 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1783422 00:06:21.202 09:39:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1783422 00:06:21.202 nvmf threads initialize successfully 00:06:21.202 bdev subsystem init successfully 00:06:21.202 created a nvmf target service 00:06:21.202 create targets's poll groups done 00:06:21.202 all subsystems of target started 00:06:21.202 nvmf target is running 00:06:21.202 all subsystems of target stopped 00:06:21.202 destroy targets's poll groups done 00:06:21.202 destroyed the nvmf target service 00:06:21.202 bdev 
subsystem finish successfully 00:06:21.202 nvmf threads destroy successfully 00:06:21.202 09:39:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:21.202 09:39:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:21.202 09:39:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:21.202 09:39:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:21.202 09:39:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:21.202 09:39:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.202 09:39:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:21.202 09:39:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:21.460 09:39:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:21.460 09:39:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:21.460 09:39:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:21.460 09:39:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:21.460 00:06:21.460 real 0m15.105s 00:06:21.460 user 0m38.338s 00:06:21.460 sys 0m4.544s 00:06:21.460 09:39:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.460 09:39:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:21.460 ************************************ 00:06:21.460 END TEST nvmf_example 00:06:21.460 ************************************ 00:06:21.460 09:39:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:21.460 09:39:38 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:21.460 09:39:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:21.460 09:39:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.460 09:39:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.460 ************************************ 00:06:21.460 START TEST nvmf_filesystem 00:06:21.460 ************************************ 00:06:21.460 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:21.720 * Looking for test storage... 
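The nvmf_example test that just finished builds a two-namespace topology on a single host: the target-side interface (cvl_0_0 on this machine) moves into namespace cvl_0_0_ns_spdk at 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1 in the root namespace, and TCP port 4420 is opened before the example target is launched and a malloc-backed subsystem is created over RPC. Condensed from the trace (interface names and addresses are specific to this host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # target setup over RPC, then a 10 s 4 KiB random read/write run at queue depth 64
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
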
00:06:21.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:21.720 09:39:38 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:21.720 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:21.721 #define SPDK_CONFIG_H 00:06:21.721 #define SPDK_CONFIG_APPS 1 00:06:21.721 #define SPDK_CONFIG_ARCH native 00:06:21.721 #undef SPDK_CONFIG_ASAN 00:06:21.721 #undef SPDK_CONFIG_AVAHI 00:06:21.721 #undef SPDK_CONFIG_CET 00:06:21.721 #define SPDK_CONFIG_COVERAGE 1 00:06:21.721 #define SPDK_CONFIG_CROSS_PREFIX 00:06:21.721 #undef SPDK_CONFIG_CRYPTO 00:06:21.721 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:21.721 #undef SPDK_CONFIG_CUSTOMOCF 00:06:21.721 #undef SPDK_CONFIG_DAOS 00:06:21.721 #define SPDK_CONFIG_DAOS_DIR 00:06:21.721 #define SPDK_CONFIG_DEBUG 1 00:06:21.721 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:21.721 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:06:21.721 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:06:21.721 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:21.721 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:21.721 #undef SPDK_CONFIG_DPDK_UADK 00:06:21.721 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:21.721 #define SPDK_CONFIG_EXAMPLES 1 00:06:21.721 #undef SPDK_CONFIG_FC 00:06:21.721 #define SPDK_CONFIG_FC_PATH 00:06:21.721 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:21.721 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:21.721 #undef SPDK_CONFIG_FUSE 00:06:21.721 #undef SPDK_CONFIG_FUZZER 00:06:21.721 #define SPDK_CONFIG_FUZZER_LIB 00:06:21.721 #undef SPDK_CONFIG_GOLANG 00:06:21.721 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:21.721 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:21.721 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:21.721 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:21.721 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:21.721 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:21.721 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:21.721 #define SPDK_CONFIG_IDXD 1 00:06:21.721 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:21.721 #undef SPDK_CONFIG_IPSEC_MB 00:06:21.721 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:21.721 #define SPDK_CONFIG_ISAL 1 00:06:21.721 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:21.721 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:21.721 #define 
SPDK_CONFIG_LIBDIR 00:06:21.721 #undef SPDK_CONFIG_LTO 00:06:21.721 #define SPDK_CONFIG_MAX_LCORES 128 00:06:21.721 #define SPDK_CONFIG_NVME_CUSE 1 00:06:21.721 #undef SPDK_CONFIG_OCF 00:06:21.721 #define SPDK_CONFIG_OCF_PATH 00:06:21.721 #define SPDK_CONFIG_OPENSSL_PATH 00:06:21.721 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:21.721 #define SPDK_CONFIG_PGO_DIR 00:06:21.721 #undef SPDK_CONFIG_PGO_USE 00:06:21.721 #define SPDK_CONFIG_PREFIX /usr/local 00:06:21.721 #undef SPDK_CONFIG_RAID5F 00:06:21.721 #undef SPDK_CONFIG_RBD 00:06:21.721 #define SPDK_CONFIG_RDMA 1 00:06:21.721 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:21.721 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:21.721 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:21.721 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:21.721 #define SPDK_CONFIG_SHARED 1 00:06:21.721 #undef SPDK_CONFIG_SMA 00:06:21.721 #define SPDK_CONFIG_TESTS 1 00:06:21.721 #undef SPDK_CONFIG_TSAN 00:06:21.721 #define SPDK_CONFIG_UBLK 1 00:06:21.721 #define SPDK_CONFIG_UBSAN 1 00:06:21.721 #undef SPDK_CONFIG_UNIT_TESTS 00:06:21.721 #undef SPDK_CONFIG_URING 00:06:21.721 #define SPDK_CONFIG_URING_PATH 00:06:21.721 #undef SPDK_CONFIG_URING_ZNS 00:06:21.721 #undef SPDK_CONFIG_USDT 00:06:21.721 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:21.721 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:21.721 #define SPDK_CONFIG_VFIO_USER 1 00:06:21.721 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:21.721 #define SPDK_CONFIG_VHOST 1 00:06:21.721 #define SPDK_CONFIG_VIRTIO 1 00:06:21.721 #undef SPDK_CONFIG_VTUNE 00:06:21.721 #define SPDK_CONFIG_VTUNE_DIR 00:06:21.721 #define SPDK_CONFIG_WERROR 1 00:06:21.721 #define SPDK_CONFIG_WPDK_DIR 00:06:21.721 #undef SPDK_CONFIG_XNVME 00:06:21.721 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.721 09:39:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
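The applications.sh@22-23 check traced a little earlier gates debug-only test apps on the generated config header actually declaring a debug build. A minimal sketch of that check, reconstructed from the trace (path copied from this run's workspace; not the verbatim script):

    # Sketch: debug-only tooling is enabled only if the generated SPDK
    # config header contains the SPDK_CONFIG_DEBUG define, matched with a
    # plain glob against the file contents (as in the escaped pattern above).
    config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
    if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        # debug build: SPDK_AUTOTEST_DEBUG_APPS may swap in fuzzer variants
        ((SPDK_AUTOTEST_DEBUG_APPS)) && :
    fi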
00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:21.722 
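The pm/common@70-88 block traced just above picks which resource monitors run, adding the temperature and BMC collectors only on bare metal. A condensed sketch of that selection; how the script obtains the product string is an assumption here (DMI product name), since the trace only shows a padded string compared against QEMU:

    # Sketch of the monitor selection from scripts/perf/pm/common as traced.
    declare -A MONITOR_RESOURCES_SUDO=(
        [collect-bmc-pm]=1 [collect-cpu-load]=0 [collect-cpu-temp]=0 [collect-vmstat]=0
    )
    SUDO=([0]='' [1]='sudo -E')
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
    product=$(cat /sys/class/dmi/id/product_name 2>/dev/null || true)  # assumed source
    if [[ $(uname -s) == Linux && $product != QEMU* && ! -e /.dockerenv ]]; then
        # bare metal (this run): also sample CPU temperature and BMC power
        MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
    fi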
09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : main 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:21.722 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:21.723 
09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
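The long run of paired ": <value>" / "export SPDK_*" lines between autotest_common.sh@58 and @172 above is bash xtrace of a default-then-export idiom: each flag keeps any value inherited from the environment and otherwise gets the default shown. A minimal sketch with defaults taken from this trace (not the verbatim source):

    # Each ": <default>" + "export VAR" trace pair comes from an idiom like:
    : "${RUN_NIGHTLY:=1}";                  export RUN_NIGHTLY
    : "${SPDK_TEST_NVMF:=1}";               export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}";   export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_TEST_NVMF_NICS:=e810}";       export SPDK_TEST_NVMF_NICS
    : "${SPDK_RUN_UBSAN:=1}";               export SPDK_RUN_UBSAN
    # ...and so on for every flag listed in the trace above.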
00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1785039 ]] 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1785039 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.HMXfG8 00:06:21.723 
09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:21.723 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.HMXfG8/tests/target /tmp/spdk.HMXfG8 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=54043717632 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994692608 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=7950974976 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941708288 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997344256 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=55635968 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390178816 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398940160 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996492288 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997348352 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=856064 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:21.724 * Looking for test storage... 
00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=54043717632 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=10165567488 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.724 09:39:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
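nvmf/common.sh, sourced just above, fixes the listener ports (4420-4422) and derives a host NQN/ID pair with `nvme gen-hostnqn`; tests later splice the NVME_HOST array into `nvme connect`. An illustrative usage sketch, not a command from this trace (target address and subsystem NQN are taken from later in this same log):

    # How the NVME_* variables traced above are typically consumed.
    NVMF_PORT=4420
    NVME_HOSTNQN=$(nvme gen-hostnqn)             # nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s "$NVMF_PORT" \
        -n nqn.2016-06.io.spdk:cnode1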
00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:21.725 09:39:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
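gather_supported_nvmf_pci_devs, entered above, builds per-family device-ID lists (Intel E810/X722 plus several Mellanox parts) and then resolves each matching PCI function to its kernel net device through sysfs. A condensed sketch of the E810 path this run takes, with device IDs copied from the trace:

    # Sketch of the e810 discovery traced in nvmf/common.sh@289-401.
    intel=0x8086
    e810=(0x1592 0x159b)                       # E810 device IDs from the trace
    net_devs=()
    for dev in /sys/bus/pci/devices/*; do
        read -r vendor <"$dev/vendor"; read -r device <"$dev/device"
        [[ $vendor == "$intel" && " ${e810[*]} " == *" $device "* ]] || continue
        for net in "$dev"/net/*; do            # e.g. .../0000:0a:00.0/net/cvl_0_0
            [[ -e $net ]] && net_devs+=("${net##*/}")
        done
    done
    printf 'Found net devices: %s\n' "${net_devs[*]}"   # cvl_0_0 cvl_0_1 in this log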
00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:24.255 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:24.255 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:24.255 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:24.256 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:24.256 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:24.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:24.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:06:24.256 00:06:24.256 --- 10.0.0.2 ping statistics --- 00:06:24.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.256 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:24.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:24.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:06:24.256 00:06:24.256 --- 10.0.0.1 ping statistics --- 00:06:24.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.256 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.256 ************************************ 00:06:24.256 START TEST nvmf_filesystem_no_in_capsule 00:06:24.256 ************************************ 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1786740 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1786740 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 
1786740 ']' 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.256 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.257 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.257 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.257 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.257 [2024-07-15 09:39:40.702456] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:24.257 [2024-07-15 09:39:40.702535] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:24.257 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.257 [2024-07-15 09:39:40.740265] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:24.257 [2024-07-15 09:39:40.770283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.257 [2024-07-15 09:39:40.864352] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:24.257 [2024-07-15 09:39:40.864418] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:24.257 [2024-07-15 09:39:40.864435] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:24.257 [2024-07-15 09:39:40.864449] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:24.257 [2024-07-15 09:39:40.864460] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
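The two single-packet pings above are the harness confirming that both ends of the veth pair inside the cvl_0_0_ns_spdk network namespace are reachable before it starts the target; nvmf_tgt is then launched inside that same namespace and the script blocks until the RPC socket appears. A minimal sketch of that sequence, reconstructed from the log: the namespace name, addresses, binary path and flags are taken from the lines above, while the variable names and the polling loop are illustrative stand-ins for the waitforlisten helper.

# Startup sketch; assumes the SPDK build-tree layout shown in the log.
NS=cvl_0_0_ns_spdk
ip netns exec "$NS" ping -c 1 10.0.0.2                    # target-side address
ip netns exec "$NS" ping -c 1 10.0.0.1                    # initiator-side address
ip netns exec "$NS" ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Stand-in for waitforlisten: poll until the RPC domain socket exists
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done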
00:06:24.257 [2024-07-15 09:39:40.864540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.257 [2024-07-15 09:39:40.864597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.257 [2024-07-15 09:39:40.864639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.257 [2024-07-15 09:39:40.864641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.257 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.257 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:24.257 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:24.257 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:24.257 09:39:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.257 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:24.257 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:24.257 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:24.257 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.257 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.257 [2024-07-15 09:39:41.017795] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.257 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.257 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:24.257 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.257 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.515 Malloc1 00:06:24.515 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.515 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:24.515 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.515 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.515 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.515 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:24.515 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.515 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:24.515 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.515 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:24.515 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.515 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.515 [2024-07-15 09:39:41.198268] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:24.515 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.515 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:24.515 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:24.515 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:24.516 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:24.516 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:24.516 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:24.516 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.516 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.516 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.516 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:24.516 { 00:06:24.516 "name": "Malloc1", 00:06:24.516 "aliases": [ 00:06:24.516 "890e6a82-051b-42fe-9b05-acb22d781dd4" 00:06:24.516 ], 00:06:24.516 "product_name": "Malloc disk", 00:06:24.516 "block_size": 512, 00:06:24.516 "num_blocks": 1048576, 00:06:24.516 "uuid": "890e6a82-051b-42fe-9b05-acb22d781dd4", 00:06:24.516 "assigned_rate_limits": { 00:06:24.516 "rw_ios_per_sec": 0, 00:06:24.516 "rw_mbytes_per_sec": 0, 00:06:24.516 "r_mbytes_per_sec": 0, 00:06:24.516 "w_mbytes_per_sec": 0 00:06:24.516 }, 00:06:24.516 "claimed": true, 00:06:24.516 "claim_type": "exclusive_write", 00:06:24.516 "zoned": false, 00:06:24.516 "supported_io_types": { 00:06:24.516 "read": true, 00:06:24.516 "write": true, 00:06:24.516 "unmap": true, 00:06:24.516 "flush": true, 00:06:24.516 "reset": true, 00:06:24.516 "nvme_admin": false, 00:06:24.516 "nvme_io": false, 00:06:24.516 "nvme_io_md": false, 00:06:24.516 "write_zeroes": true, 00:06:24.516 "zcopy": true, 00:06:24.516 "get_zone_info": false, 00:06:24.516 "zone_management": false, 00:06:24.516 "zone_append": false, 00:06:24.516 "compare": false, 00:06:24.516 "compare_and_write": false, 00:06:24.516 "abort": true, 00:06:24.516 "seek_hole": false, 00:06:24.516 "seek_data": false, 00:06:24.516 "copy": true, 00:06:24.516 "nvme_iov_md": false 00:06:24.516 }, 00:06:24.516 "memory_domains": [ 00:06:24.516 { 
00:06:24.516 "dma_device_id": "system", 00:06:24.516 "dma_device_type": 1 00:06:24.516 }, 00:06:24.516 { 00:06:24.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.516 "dma_device_type": 2 00:06:24.516 } 00:06:24.516 ], 00:06:24.516 "driver_specific": {} 00:06:24.516 } 00:06:24.516 ]' 00:06:24.516 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:24.516 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:24.516 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:24.516 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:24.516 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:24.516 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:24.516 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:24.516 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:25.449 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:25.449 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:25.449 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:25.449 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:25.449 09:39:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:27.412 09:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:27.412 09:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:27.412 09:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:27.412 09:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:27.412 09:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:27.412 09:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:27.412 09:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:27.412 09:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:27.412 09:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:27.412 09:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:06:27.412 09:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:27.412 09:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:27.412 09:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:27.412 09:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:27.412 09:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:27.412 09:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:27.412 09:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:27.669 09:39:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:28.602 09:39:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:29.535 09:39:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:29.535 09:39:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:29.535 09:39:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:29.535 09:39:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.535 09:39:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.535 ************************************ 00:06:29.535 START TEST filesystem_ext4 00:06:29.535 ************************************ 00:06:29.535 09:39:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:29.535 09:39:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:29.535 09:39:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:29.535 09:39:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:29.535 09:39:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:29.536 09:39:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:29.536 09:39:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:29.536 09:39:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:29.536 09:39:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:29.536 09:39:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:29.536 09:39:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:29.536 mke2fs 1.46.5 (30-Dec-2021) 00:06:29.536 Discarding device blocks: 0/522240 done 00:06:29.536 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:29.536 Filesystem UUID: d14f3cf8-128b-486b-9404-bd41c4c42c38 00:06:29.536 Superblock backups stored on blocks: 00:06:29.536 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:29.536 00:06:29.536 Allocating group tables: 0/64 done 00:06:29.536 Writing inode tables: 0/64 done 00:06:29.793 Creating journal (8192 blocks): done 00:06:29.793 Writing superblocks and filesystem accounting information: 0/64 done 00:06:29.793 00:06:29.793 09:39:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:29.793 09:39:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1786740 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:30.726 00:06:30.726 real 0m1.236s 00:06:30.726 user 0m0.010s 00:06:30.726 sys 0m0.068s 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:30.726 ************************************ 00:06:30.726 END TEST filesystem_ext4 00:06:30.726 ************************************ 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:30.726 09:39:47 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:30.726 ************************************ 00:06:30.726 START TEST filesystem_btrfs 00:06:30.726 ************************************ 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:30.726 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:31.292 btrfs-progs v6.6.2 00:06:31.292 See https://btrfs.readthedocs.io for more information. 00:06:31.292 00:06:31.292 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
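With the target up, filesystem.sh provisions it over JSON-RPC and then attaches from the host side: a TCP transport created with -c 0 (zero in-capsule data bytes, which is what the _no_in_capsule suffix refers to), a 512 MiB malloc ramdisk with 512-byte blocks, a subsystem with serial SPDKISFASTANDAWESOME, the namespace, and a listener on 10.0.0.2:4420; nvme-cli then connects and parted carves one GPT partition out of the new block device. The sketch below replays those calls with rpc.py; the rpc.py path is an assumption, the --hostnqn/--hostid flags from the log are omitted for brevity, and everything else is copied from the traces.

RPC=./spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0    # -c 0: no in-capsule data
$RPC bdev_malloc_create 512 512 -b Malloc1           # 512 MiB, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Host side: connect, then create a single full-size GPT partition
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%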
00:06:31.292 NOTE: several default settings have changed in version 5.15, please make sure 00:06:31.292 this does not affect your deployments: 00:06:31.292 - DUP for metadata (-m dup) 00:06:31.292 - enabled no-holes (-O no-holes) 00:06:31.292 - enabled free-space-tree (-R free-space-tree) 00:06:31.292 00:06:31.292 Label: (null) 00:06:31.292 UUID: 55010591-1e30-4230-9d13-01785ec718be 00:06:31.292 Node size: 16384 00:06:31.292 Sector size: 4096 00:06:31.292 Filesystem size: 510.00MiB 00:06:31.292 Block group profiles: 00:06:31.292 Data: single 8.00MiB 00:06:31.292 Metadata: DUP 32.00MiB 00:06:31.292 System: DUP 8.00MiB 00:06:31.292 SSD detected: yes 00:06:31.292 Zoned device: no 00:06:31.292 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:31.292 Runtime features: free-space-tree 00:06:31.292 Checksum: crc32c 00:06:31.292 Number of devices: 1 00:06:31.292 Devices: 00:06:31.292 ID SIZE PATH 00:06:31.292 1 510.00MiB /dev/nvme0n1p1 00:06:31.292 00:06:31.292 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:31.292 09:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:31.549 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:31.549 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1786740 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:31.550 00:06:31.550 real 0m0.828s 00:06:31.550 user 0m0.032s 00:06:31.550 sys 0m0.100s 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:31.550 ************************************ 00:06:31.550 END TEST filesystem_btrfs 00:06:31.550 ************************************ 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.550 ************************************ 00:06:31.550 START TEST filesystem_xfs 00:06:31.550 ************************************ 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:31.550 09:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:31.808 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:31.808 = sectsz=512 attr=2, projid32bit=1 00:06:31.808 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:31.808 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:31.808 data = bsize=4096 blocks=130560, imaxpct=25 00:06:31.808 = sunit=0 swidth=0 blks 00:06:31.808 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:31.808 log =internal log bsize=4096 blocks=16384, version=2 00:06:31.808 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:31.808 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:32.373 Discarding blocks...Done. 
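Every filesystem case (ext4 above, btrfs and xfs here) runs the same verification pattern visible in the traces: make_filesystem picks -F for ext4 and -f otherwise, the partition is mounted at /mnt/device, a file is created and removed with a sync after each step, the mount is torn down, kill -0 confirms the target process survived the I/O, and lsblk confirms the device and partition are still visible. A condensed, illustrative form of that loop follows; the helper name and the absence of error handling are simplifications, not the script's own code.

check_fs() {
    local fstype=$1 dev=/dev/nvme0n1p1 force=-f
    [ "$fstype" = ext4 ] && force=-F       # mkfs.ext4 spells "force" differently
    mkfs."$fstype" "$force" "$dev"
    mount "$dev" /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                     # target must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1p1
}
for fs in ext4 btrfs xfs; do check_fs "$fs"; done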
00:06:32.373 09:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:32.373 09:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:34.271 09:39:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:34.271 09:39:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:34.271 09:39:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:34.271 09:39:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:34.271 09:39:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:34.271 09:39:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:34.271 09:39:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1786740 00:06:34.271 09:39:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:34.271 09:39:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:34.271 09:39:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:34.271 09:39:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:34.271 00:06:34.271 real 0m2.723s 00:06:34.271 user 0m0.016s 00:06:34.271 sys 0m0.059s 00:06:34.271 09:39:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.271 09:39:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:34.271 ************************************ 00:06:34.271 END TEST filesystem_xfs 00:06:34.271 ************************************ 00:06:34.271 09:39:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:34.271 09:39:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:34.530 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:34.530 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:34.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:34.788 09:39:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1786740 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1786740 ']' 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1786740 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1786740 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1786740' 00:06:34.788 killing process with pid 1786740 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1786740 00:06:34.788 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1786740 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:35.355 00:06:35.355 real 0m11.237s 00:06:35.355 user 0m42.944s 00:06:35.355 sys 0m1.837s 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.355 ************************************ 00:06:35.355 END TEST nvmf_filesystem_no_in_capsule 00:06:35.355 ************************************ 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:35.355 ************************************ 00:06:35.355 START TEST nvmf_filesystem_in_capsule 00:06:35.355 ************************************ 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1788183 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1788183 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1788183 ']' 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.355 09:39:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.355 [2024-07-15 09:39:51.993831] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:35.355 [2024-07-15 09:39:51.993912] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.355 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.355 [2024-07-15 09:39:52.032616] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:35.355 [2024-07-15 09:39:52.065514] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.614 [2024-07-15 09:39:52.158423] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
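After the XFS case the first run tears itself down (nvme disconnect, nvmf_delete_subsystem, killprocess on pid 1786740) and the suite restarts as nvmf_filesystem_in_capsule with a fresh target, pid 1788183. The only functional difference is the transport: with -c 4096 the target accepts up to 4096 bytes of in-capsule data, so small writes ride inside the NVMe/TCP command capsule instead of a separate data transfer. A sketch of that boundary, with commands mirrored from the log and the wait/retry details omitted:

# End of the no-in-capsule run
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"
# Second run: identical flow, only the transport parameter changes
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 4096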
00:06:35.614 [2024-07-15 09:39:52.158481] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:35.614 [2024-07-15 09:39:52.158497] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:35.614 [2024-07-15 09:39:52.158510] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:35.614 [2024-07-15 09:39:52.158522] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:35.614 [2024-07-15 09:39:52.158611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.614 [2024-07-15 09:39:52.158644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.614 [2024-07-15 09:39:52.158761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.614 [2024-07-15 09:39:52.158763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.614 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.614 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:35.614 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:35.614 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:35.614 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.614 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:35.614 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:35.614 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:35.614 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.614 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.614 [2024-07-15 09:39:52.314719] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.614 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.614 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:35.614 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.614 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.872 Malloc1 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.872 09:39:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.872 [2024-07-15 09:39:52.498269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.872 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:35.872 { 00:06:35.872 "name": "Malloc1", 00:06:35.872 "aliases": [ 00:06:35.872 "d7bb1a35-ef9d-4ada-88fa-74779ed0232f" 00:06:35.872 ], 00:06:35.872 "product_name": "Malloc disk", 00:06:35.872 "block_size": 512, 00:06:35.872 "num_blocks": 1048576, 00:06:35.872 "uuid": "d7bb1a35-ef9d-4ada-88fa-74779ed0232f", 00:06:35.872 "assigned_rate_limits": { 00:06:35.872 "rw_ios_per_sec": 0, 00:06:35.872 "rw_mbytes_per_sec": 0, 00:06:35.872 "r_mbytes_per_sec": 0, 00:06:35.872 "w_mbytes_per_sec": 0 00:06:35.872 }, 00:06:35.872 "claimed": true, 00:06:35.872 "claim_type": "exclusive_write", 00:06:35.872 "zoned": false, 00:06:35.872 "supported_io_types": { 00:06:35.872 "read": true, 00:06:35.872 "write": true, 00:06:35.872 "unmap": true, 00:06:35.872 "flush": true, 00:06:35.872 "reset": true, 00:06:35.872 "nvme_admin": false, 00:06:35.872 "nvme_io": false, 00:06:35.872 "nvme_io_md": false, 00:06:35.872 "write_zeroes": true, 
00:06:35.872 "zcopy": true, 00:06:35.872 "get_zone_info": false, 00:06:35.872 "zone_management": false, 00:06:35.872 "zone_append": false, 00:06:35.872 "compare": false, 00:06:35.872 "compare_and_write": false, 00:06:35.872 "abort": true, 00:06:35.872 "seek_hole": false, 00:06:35.872 "seek_data": false, 00:06:35.872 "copy": true, 00:06:35.872 "nvme_iov_md": false 00:06:35.873 }, 00:06:35.873 "memory_domains": [ 00:06:35.873 { 00:06:35.873 "dma_device_id": "system", 00:06:35.873 "dma_device_type": 1 00:06:35.873 }, 00:06:35.873 { 00:06:35.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.873 "dma_device_type": 2 00:06:35.873 } 00:06:35.873 ], 00:06:35.873 "driver_specific": {} 00:06:35.873 } 00:06:35.873 ]' 00:06:35.873 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:35.873 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:35.873 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:35.873 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:35.873 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:35.873 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:35.873 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:35.873 09:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:36.438 09:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:36.438 09:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:36.438 09:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:36.438 09:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:36.438 09:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:38.962 09:39:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:38.962 09:39:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:38.962 09:39:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:38.962 09:39:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:38.962 09:39:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:38.962 09:39:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:38.962 09:39:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:38.962 09:39:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:38.962 09:39:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:38.962 09:39:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:38.962 09:39:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:38.962 09:39:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:38.962 09:39:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:38.962 09:39:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:38.962 09:39:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:38.962 09:39:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:38.962 09:39:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:38.962 09:39:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:39.219 09:39:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:40.591 09:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:40.591 09:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:40.591 09:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:40.591 09:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.591 09:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.591 ************************************ 00:06:40.591 START TEST filesystem_in_capsule_ext4 00:06:40.591 ************************************ 00:06:40.591 09:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:40.591 09:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:40.591 09:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:40.591 09:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:40.591 09:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:40.591 09:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:40.591 09:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:40.591 09:39:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:40.591 09:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:40.591 09:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:40.591 09:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:40.591 mke2fs 1.46.5 (30-Dec-2021) 00:06:40.591 Discarding device blocks: 0/522240 done 00:06:40.591 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:40.591 Filesystem UUID: 8482ff41-b24c-46a8-b0f7-f52a60c94556 00:06:40.591 Superblock backups stored on blocks: 00:06:40.591 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:40.591 00:06:40.591 Allocating group tables: 0/64 done 00:06:40.591 Writing inode tables: 0/64 done 00:06:40.591 Creating journal (8192 blocks): done 00:06:40.849 Writing superblocks and filesystem accounting information: 0/64 done 00:06:40.849 00:06:40.849 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:40.849 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:41.107 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:41.107 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:41.107 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:41.107 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:41.107 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:41.107 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:41.364 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1788183 00:06:41.364 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:41.364 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:41.365 00:06:41.365 real 0m0.950s 00:06:41.365 user 0m0.016s 00:06:41.365 sys 0m0.054s 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:41.365 
************************************ 00:06:41.365 END TEST filesystem_in_capsule_ext4 00:06:41.365 ************************************ 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.365 ************************************ 00:06:41.365 START TEST filesystem_in_capsule_btrfs 00:06:41.365 ************************************ 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:41.365 09:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:41.623 btrfs-progs v6.6.2 00:06:41.623 See https://btrfs.readthedocs.io for more information. 00:06:41.623 00:06:41.623 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:41.623 NOTE: several default settings have changed in version 5.15, please make sure 00:06:41.623 this does not affect your deployments: 00:06:41.623 - DUP for metadata (-m dup) 00:06:41.623 - enabled no-holes (-O no-holes) 00:06:41.623 - enabled free-space-tree (-R free-space-tree) 00:06:41.623 00:06:41.623 Label: (null) 00:06:41.623 UUID: 6b74065b-e459-4886-b8dc-9ef3a1ffeb62 00:06:41.623 Node size: 16384 00:06:41.623 Sector size: 4096 00:06:41.623 Filesystem size: 510.00MiB 00:06:41.623 Block group profiles: 00:06:41.623 Data: single 8.00MiB 00:06:41.623 Metadata: DUP 32.00MiB 00:06:41.623 System: DUP 8.00MiB 00:06:41.623 SSD detected: yes 00:06:41.623 Zoned device: no 00:06:41.623 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:41.623 Runtime features: free-space-tree 00:06:41.623 Checksum: crc32c 00:06:41.623 Number of devices: 1 00:06:41.623 Devices: 00:06:41.623 ID SIZE PATH 00:06:41.623 1 510.00MiB /dev/nvme0n1p1 00:06:41.623 00:06:41.623 09:39:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:41.623 09:39:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1788183 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:42.556 00:06:42.556 real 0m1.273s 00:06:42.556 user 0m0.026s 00:06:42.556 sys 0m0.111s 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:42.556 ************************************ 00:06:42.556 END TEST filesystem_in_capsule_btrfs 00:06:42.556 ************************************ 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:42.556 ************************************ 00:06:42.556 START TEST filesystem_in_capsule_xfs 00:06:42.556 ************************************ 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:42.556 09:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:42.814 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:42.814 = sectsz=512 attr=2, projid32bit=1 00:06:42.814 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:42.814 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:42.814 data = bsize=4096 blocks=130560, imaxpct=25 00:06:42.814 = sunit=0 swidth=0 blks 00:06:42.814 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:42.814 log =internal log bsize=4096 blocks=16384, version=2 00:06:42.814 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:42.814 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:43.745 Discarding blocks...Done. 
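The create-and-verify loop exercised for each filesystem above reduces to the sketch below. Everything in it is reconstructed from the xtrace (device, mount point, and force flags exactly as traced); the body of the make_filesystem helper from common.sh is paraphrased rather than copied, so treat it as an outline, not the canonical source.

# make_filesystem as reconstructed from the xtrace (common.sh@924-943), outline only
make_filesystem() {
    local fstype=$1 dev_name=$2 force
    local i=0                                # retry counter in the original helper
    # ext4 overwrites an existing signature with -F; btrfs and xfs use -f
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    mkfs."$fstype" "$force" "$dev_name"
}

# per-fstype verification pass (target/filesystem.sh lines 18-43 in the trace)
make_filesystem xfs /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa; sync
rm /mnt/device/aaa; sync
umount /mnt/device
kill -0 "$nvmfpid"                           # target process (pid 1788183 here) must survive
lsblk -l -o NAME | grep -q -w nvme0n1        # namespace still exported over NVMe/TCP
lsblk -l -o NAME | grep -q -w nvme0n1p1      # partition still visible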
00:06:43.745 09:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:43.745 09:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:46.318 09:40:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:46.318 09:40:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:46.318 09:40:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:46.318 09:40:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:46.318 09:40:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:46.318 09:40:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:46.318 09:40:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1788183 00:06:46.318 09:40:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:46.318 09:40:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:46.318 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:46.318 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:46.318 00:06:46.318 real 0m3.715s 00:06:46.318 user 0m0.012s 00:06:46.318 sys 0m0.066s 00:06:46.318 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.318 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:46.318 ************************************ 00:06:46.318 END TEST filesystem_in_capsule_xfs 00:06:46.318 ************************************ 00:06:46.318 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:46.318 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:46.585 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:46.585 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:46.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:46.844 09:40:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1788183 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1788183 ']' 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1788183 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1788183 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1788183' 00:06:46.844 killing process with pid 1788183 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1788183 00:06:46.844 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1788183 00:06:47.102 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:47.102 00:06:47.102 real 0m11.940s 00:06:47.102 user 0m45.818s 00:06:47.102 sys 0m1.746s 00:06:47.102 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.102 09:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:47.102 ************************************ 00:06:47.102 END TEST nvmf_filesystem_in_capsule 00:06:47.102 ************************************ 00:06:47.361 09:40:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:47.361 09:40:03 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:47.361 09:40:03 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:06:47.361 09:40:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:47.361 09:40:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:47.362 09:40:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:47.362 09:40:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:47.362 09:40:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:47.362 rmmod nvme_tcp 00:06:47.362 rmmod nvme_fabrics 00:06:47.362 rmmod nvme_keyring 00:06:47.362 09:40:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:47.362 09:40:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:47.362 09:40:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:47.362 09:40:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:47.362 09:40:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:47.362 09:40:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:47.362 09:40:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:47.362 09:40:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:47.362 09:40:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:47.362 09:40:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.362 09:40:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:47.362 09:40:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.259 09:40:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:49.259 00:06:49.259 real 0m27.780s 00:06:49.259 user 1m29.719s 00:06:49.259 sys 0m5.216s 00:06:49.259 09:40:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.259 09:40:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:49.259 ************************************ 00:06:49.259 END TEST nvmf_filesystem 00:06:49.259 ************************************ 00:06:49.259 09:40:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:49.259 09:40:06 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:49.259 09:40:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:49.259 09:40:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.259 09:40:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:49.518 ************************************ 00:06:49.518 START TEST nvmf_target_discovery 00:06:49.518 ************************************ 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:49.518 * Looking for test storage... 
00:06:49.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:49.518 09:40:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:51.414 09:40:08 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:51.414 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:51.414 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:51.414 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.414 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:51.415 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:51.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:06:51.415 00:06:51.415 --- 10.0.0.2 ping statistics --- 00:06:51.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.415 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:51.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:51.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:06:51.415 00:06:51.415 --- 10.0.0.1 ping statistics --- 00:06:51.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.415 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1791786 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1791786 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1791786 ']' 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:51.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.415 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.674 [2024-07-15 09:40:08.238337] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:51.674 [2024-07-15 09:40:08.238419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.674 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.674 [2024-07-15 09:40:08.275946] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:51.674 [2024-07-15 09:40:08.305298] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.674 [2024-07-15 09:40:08.397351] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.674 [2024-07-15 09:40:08.397412] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.674 [2024-07-15 09:40:08.397428] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.674 [2024-07-15 09:40:08.397441] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.674 [2024-07-15 09:40:08.397453] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:51.674 [2024-07-15 09:40:08.397537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.674 [2024-07-15 09:40:08.397584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.674 [2024-07-15 09:40:08.397674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.674 [2024-07-15 09:40:08.397677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.933 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.933 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:51.933 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:51.933 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:51.933 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.933 09:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 [2024-07-15 09:40:08.550806] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:51.934 09:40:08 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 Null1 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 [2024-07-15 09:40:08.591132] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 Null2 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 Null3 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 Null4 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.934 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:06:52.193 00:06:52.193 Discovery Log Number of Records 6, Generation counter 6 00:06:52.193 =====Discovery Log Entry 0====== 00:06:52.193 trtype: tcp 00:06:52.193 adrfam: ipv4 00:06:52.193 subtype: current discovery subsystem 00:06:52.193 treq: not required 00:06:52.193 portid: 0 00:06:52.193 trsvcid: 4420 00:06:52.193 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:52.193 traddr: 10.0.0.2 00:06:52.193 eflags: explicit discovery connections, duplicate discovery information 00:06:52.193 sectype: none 00:06:52.193 =====Discovery Log Entry 1====== 00:06:52.193 trtype: tcp 00:06:52.193 adrfam: ipv4 00:06:52.193 subtype: nvme subsystem 00:06:52.193 treq: not required 00:06:52.193 portid: 0 00:06:52.193 trsvcid: 4420 00:06:52.193 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:52.193 traddr: 10.0.0.2 00:06:52.193 eflags: none 00:06:52.193 sectype: none 00:06:52.193 =====Discovery Log Entry 2====== 00:06:52.193 trtype: tcp 00:06:52.193 adrfam: ipv4 00:06:52.193 subtype: nvme subsystem 00:06:52.193 treq: not required 00:06:52.193 portid: 0 00:06:52.193 trsvcid: 4420 00:06:52.193 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:52.193 traddr: 10.0.0.2 00:06:52.193 eflags: none 00:06:52.193 sectype: none 00:06:52.193 =====Discovery Log Entry 3====== 00:06:52.193 trtype: tcp 00:06:52.193 adrfam: ipv4 00:06:52.193 subtype: nvme subsystem 00:06:52.193 treq: not required 00:06:52.193 portid: 0 00:06:52.193 trsvcid: 4420 00:06:52.193 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:52.193 traddr: 10.0.0.2 
00:06:52.193 eflags: none 00:06:52.193 sectype: none 00:06:52.193 =====Discovery Log Entry 4====== 00:06:52.193 trtype: tcp 00:06:52.193 adrfam: ipv4 00:06:52.193 subtype: nvme subsystem 00:06:52.193 treq: not required 00:06:52.193 portid: 0 00:06:52.193 trsvcid: 4420 00:06:52.193 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:52.193 traddr: 10.0.0.2 00:06:52.193 eflags: none 00:06:52.193 sectype: none 00:06:52.193 =====Discovery Log Entry 5====== 00:06:52.193 trtype: tcp 00:06:52.193 adrfam: ipv4 00:06:52.193 subtype: discovery subsystem referral 00:06:52.193 treq: not required 00:06:52.193 portid: 0 00:06:52.193 trsvcid: 4430 00:06:52.193 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:52.193 traddr: 10.0.0.2 00:06:52.193 eflags: none 00:06:52.193 sectype: none 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:52.193 Perform nvmf subsystem discovery via RPC 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.193 [ 00:06:52.193 { 00:06:52.193 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:52.193 "subtype": "Discovery", 00:06:52.193 "listen_addresses": [ 00:06:52.193 { 00:06:52.193 "trtype": "TCP", 00:06:52.193 "adrfam": "IPv4", 00:06:52.193 "traddr": "10.0.0.2", 00:06:52.193 "trsvcid": "4420" 00:06:52.193 } 00:06:52.193 ], 00:06:52.193 "allow_any_host": true, 00:06:52.193 "hosts": [] 00:06:52.193 }, 00:06:52.193 { 00:06:52.193 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:52.193 "subtype": "NVMe", 00:06:52.193 "listen_addresses": [ 00:06:52.193 { 00:06:52.193 "trtype": "TCP", 00:06:52.193 "adrfam": "IPv4", 00:06:52.193 "traddr": "10.0.0.2", 00:06:52.193 "trsvcid": "4420" 00:06:52.193 } 00:06:52.193 ], 00:06:52.193 "allow_any_host": true, 00:06:52.193 "hosts": [], 00:06:52.193 "serial_number": "SPDK00000000000001", 00:06:52.193 "model_number": "SPDK bdev Controller", 00:06:52.193 "max_namespaces": 32, 00:06:52.193 "min_cntlid": 1, 00:06:52.193 "max_cntlid": 65519, 00:06:52.193 "namespaces": [ 00:06:52.193 { 00:06:52.193 "nsid": 1, 00:06:52.193 "bdev_name": "Null1", 00:06:52.193 "name": "Null1", 00:06:52.193 "nguid": "8C8ABB062EF1490A8AEBEEF7EDD7A880", 00:06:52.193 "uuid": "8c8abb06-2ef1-490a-8aeb-eef7edd7a880" 00:06:52.193 } 00:06:52.193 ] 00:06:52.193 }, 00:06:52.193 { 00:06:52.193 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:52.193 "subtype": "NVMe", 00:06:52.193 "listen_addresses": [ 00:06:52.193 { 00:06:52.193 "trtype": "TCP", 00:06:52.193 "adrfam": "IPv4", 00:06:52.193 "traddr": "10.0.0.2", 00:06:52.193 "trsvcid": "4420" 00:06:52.193 } 00:06:52.193 ], 00:06:52.193 "allow_any_host": true, 00:06:52.193 "hosts": [], 00:06:52.193 "serial_number": "SPDK00000000000002", 00:06:52.193 "model_number": "SPDK bdev Controller", 00:06:52.193 "max_namespaces": 32, 00:06:52.193 "min_cntlid": 1, 00:06:52.193 "max_cntlid": 65519, 00:06:52.193 "namespaces": [ 00:06:52.193 { 00:06:52.193 "nsid": 1, 00:06:52.193 "bdev_name": "Null2", 00:06:52.193 "name": "Null2", 00:06:52.193 "nguid": "5138018A27E44E0EA184CB5031394D6E", 00:06:52.193 "uuid": "5138018a-27e4-4e0e-a184-cb5031394d6e" 00:06:52.193 } 00:06:52.193 ] 00:06:52.193 }, 00:06:52.193 { 00:06:52.193 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:52.193 "subtype": "NVMe", 00:06:52.193 "listen_addresses": [ 
00:06:52.193 { 00:06:52.193 "trtype": "TCP", 00:06:52.193 "adrfam": "IPv4", 00:06:52.193 "traddr": "10.0.0.2", 00:06:52.193 "trsvcid": "4420" 00:06:52.193 } 00:06:52.193 ], 00:06:52.193 "allow_any_host": true, 00:06:52.193 "hosts": [], 00:06:52.193 "serial_number": "SPDK00000000000003", 00:06:52.193 "model_number": "SPDK bdev Controller", 00:06:52.193 "max_namespaces": 32, 00:06:52.193 "min_cntlid": 1, 00:06:52.193 "max_cntlid": 65519, 00:06:52.193 "namespaces": [ 00:06:52.193 { 00:06:52.193 "nsid": 1, 00:06:52.193 "bdev_name": "Null3", 00:06:52.193 "name": "Null3", 00:06:52.193 "nguid": "9F40A9956C2D4C35ADAE660CA4A12FC1", 00:06:52.193 "uuid": "9f40a995-6c2d-4c35-adae-660ca4a12fc1" 00:06:52.193 } 00:06:52.193 ] 00:06:52.193 }, 00:06:52.193 { 00:06:52.193 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:52.193 "subtype": "NVMe", 00:06:52.193 "listen_addresses": [ 00:06:52.193 { 00:06:52.193 "trtype": "TCP", 00:06:52.193 "adrfam": "IPv4", 00:06:52.193 "traddr": "10.0.0.2", 00:06:52.193 "trsvcid": "4420" 00:06:52.193 } 00:06:52.193 ], 00:06:52.193 "allow_any_host": true, 00:06:52.193 "hosts": [], 00:06:52.193 "serial_number": "SPDK00000000000004", 00:06:52.193 "model_number": "SPDK bdev Controller", 00:06:52.193 "max_namespaces": 32, 00:06:52.193 "min_cntlid": 1, 00:06:52.193 "max_cntlid": 65519, 00:06:52.193 "namespaces": [ 00:06:52.193 { 00:06:52.193 "nsid": 1, 00:06:52.193 "bdev_name": "Null4", 00:06:52.193 "name": "Null4", 00:06:52.193 "nguid": "293C07B035444E2A91875C7B3911203F", 00:06:52.193 "uuid": "293c07b0-3544-4e2a-9187-5c7b3911203f" 00:06:52.193 } 00:06:52.193 ] 00:06:52.193 } 00:06:52.193 ] 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:52.193 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.194 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.194 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:52.194 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:52.194 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.194 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.451 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.451 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:52.451 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.451 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.451 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.451 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:52.451 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:52.451 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.451 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.451 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.451 09:40:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:52.451 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.451 09:40:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:52.451 
09:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:52.451 rmmod nvme_tcp 00:06:52.451 rmmod nvme_fabrics 00:06:52.451 rmmod nvme_keyring 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1791786 ']' 00:06:52.451 09:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1791786 00:06:52.452 09:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1791786 ']' 00:06:52.452 09:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1791786 00:06:52.452 09:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:52.452 09:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.452 09:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1791786 00:06:52.452 09:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:52.452 09:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.452 09:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1791786' 00:06:52.452 killing process with pid 1791786 00:06:52.452 09:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1791786 00:06:52.452 09:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1791786 00:06:52.708 09:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:52.708 09:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:52.708 09:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:52.708 09:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:52.708 09:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:52.709 09:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.709 09:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:52.709 09:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.607 09:40:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:54.607 00:06:54.607 real 0m5.323s 00:06:54.607 user 0m4.455s 00:06:54.607 sys 0m1.762s 00:06:54.608 09:40:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.608 09:40:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:06:54.608 ************************************ 00:06:54.608 END TEST nvmf_target_discovery 00:06:54.608 ************************************ 00:06:54.865 09:40:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:54.866 09:40:11 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:54.866 09:40:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:54.866 09:40:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.866 09:40:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.866 ************************************ 00:06:54.866 START TEST nvmf_referrals 00:06:54.866 ************************************ 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:54.866 * Looking for test storage... 00:06:54.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # 
NVMF_REFERRAL_IP_3=127.0.0.4 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:54.866 09:40:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
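[Editor's annotation, not part of the captured log: the referral exercise traced below boils down to a handful of RPC calls against the running target. A minimal sketch in bash, assuming rpc.py from the SPDK tree and the default /var/tmp/spdk.sock RPC socket; the transport options, listener address, and referral addresses/port are the ones this trace itself uses:

    # create the TCP transport and a discovery listener on the target
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    # register three referral entries on the referral port 4430
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    scripts/rpc.py nvmf_discovery_get_referrals | jq length    # expect 3

The same three addresses are later removed with nvmf_discovery_remove_referral and the count is re-checked against both the RPC view and an 'nvme discover' of the discovery log page, which is exactly the sequence the trace below walks through.]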
00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:57.394 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:57.394 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:57.394 
09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:57.394 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:57.394 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:57.394 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:57.395 09:40:13 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:57.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:57.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:06:57.395 00:06:57.395 --- 10.0.0.2 ping statistics --- 00:06:57.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.395 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:57.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:57.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:06:57.395 00:06:57.395 --- 10.0.0.1 ping statistics --- 00:06:57.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.395 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1793878 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1793878 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1793878 ']' 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
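[Editor's annotation, not part of the captured log: the ping exchange just above confirms the two-namespace topology that nvmf_tcp_init assembled a few records earlier. Reconstructed from this trace, with the device names as they appear in the log (cvl_0_0 on the target side, cvl_0_1 on the initiator side):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target sanity check

With this in place the target application is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt), which is the nvmfpid=1793878 process whose startup is traced next.]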
00:06:57.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.395 09:40:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.395 [2024-07-15 09:40:13.807391] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:57.395 [2024-07-15 09:40:13.807474] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.395 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.395 [2024-07-15 09:40:13.844767] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:57.395 [2024-07-15 09:40:13.871019] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:57.395 [2024-07-15 09:40:13.956996] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:57.395 [2024-07-15 09:40:13.957057] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:57.395 [2024-07-15 09:40:13.957084] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:57.395 [2024-07-15 09:40:13.957096] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:57.395 [2024-07-15 09:40:13.957106] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:57.395 [2024-07-15 09:40:13.957156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.395 [2024-07-15 09:40:13.957184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.395 [2024-07-15 09:40:13.957242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.395 [2024-07-15 09:40:13.957245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.395 [2024-07-15 09:40:14.108707] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.395 
09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.395 [2024-07-15 09:40:14.120972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.395 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.652 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:57.652 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:57.652 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:57.652 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:57.652 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:57.652 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.652 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.652 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:57.652 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.652 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:57.652 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:57.652 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:57.652 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:57.652 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:57.652 09:40:14 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:57.652 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:57.652 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.909 09:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.167 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:58.167 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:58.167 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:58.167 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:58.167 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:58.167 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.167 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:58.167 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:58.167 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:58.167 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:58.167 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:58.167 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:58.167 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:58.167 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 
8009 -o json 00:06:58.167 09:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:58.425 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:58.425 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:58.425 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:58.425 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:58.425 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.425 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:58.425 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:58.425 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:58.425 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.425 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.425 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.425 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:58.425 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:58.425 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:58.425 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:58.425 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.425 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.425 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:58.425 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- 
# echo 127.0.0.2 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.683 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:58.941 rmmod nvme_tcp 00:06:58.941 rmmod nvme_fabrics 00:06:58.941 rmmod nvme_keyring 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1793878 ']' 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1793878 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1793878 ']' 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1793878 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:58.941 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1793878 00:06:59.200 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:59.200 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:59.200 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1793878' 00:06:59.200 killing process with pid 1793878 00:06:59.200 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1793878 00:06:59.200 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1793878 00:06:59.200 09:40:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:59.200 09:40:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:59.200 09:40:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:59.200 09:40:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:59.200 09:40:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:59.200 09:40:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.200 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:59.200 09:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.733 09:40:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip 
-4 addr flush cvl_0_1 00:07:01.733 00:07:01.733 real 0m6.575s 00:07:01.733 user 0m9.389s 00:07:01.733 sys 0m2.166s 00:07:01.733 09:40:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.733 09:40:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.733 ************************************ 00:07:01.733 END TEST nvmf_referrals 00:07:01.733 ************************************ 00:07:01.733 09:40:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:01.733 09:40:18 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:01.733 09:40:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:01.733 09:40:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.733 09:40:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.733 ************************************ 00:07:01.733 START TEST nvmf_connect_disconnect 00:07:01.733 ************************************ 00:07:01.733 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:01.733 * Looking for test storage... 00:07:01.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.733 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.733 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:01.733 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.734 09:40:18 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
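[Editor's annotation, not part of the captured log: connect_disconnect.sh drives repeated fabric logins against a subsystem backed by the malloc bdev sized by the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 values set just below. A minimal sketch of one cycle, assuming the nqn.2016-06.io.spdk:cnode1 subsystem and the 10.0.0.2:4420 listener used by the earlier tests in this log; the actual loop lives in the script and is not reproduced here:

    # connect, verify the controller is visible, then tear the association down
    nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1
    nvme list-subsys                                   # controller should now be present
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

NVME_HOST carries the --hostnqn/--hostid pair that nvmf/common.sh derived above via 'nvme gen-hostnqn'.]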
00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:01.734 09:40:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:03.670 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:03.670 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == 
rdma ]] 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:03.670 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:03.671 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:03.671 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:03.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:03.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:07:03.671 00:07:03.671 --- 10.0.0.2 ping statistics --- 00:07:03.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.671 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:03.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:03.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:07:03.671 00:07:03.671 --- 10.0.0.1 ping statistics --- 00:07:03.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.671 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1796054 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1796054 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1796054 ']' 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.671 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:03.671 [2024-07-15 09:40:20.265589] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:03.671 [2024-07-15 09:40:20.265677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.671 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.671 [2024-07-15 09:40:20.303269] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:03.671 [2024-07-15 09:40:20.335238] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.671 [2024-07-15 09:40:20.425832] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:03.671 [2024-07-15 09:40:20.425910] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:03.671 [2024-07-15 09:40:20.425936] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:03.671 [2024-07-15 09:40:20.425949] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:03.671 [2024-07-15 09:40:20.425960] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:03.671 [2024-07-15 09:40:20.426056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.671 [2024-07-15 09:40:20.426094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.671 [2024-07-15 09:40:20.426221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.671 [2024-07-15 09:40:20.426223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:03.931 [2024-07-15 09:40:20.582654] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.931 09:40:20 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.931 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:03.932 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.932 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:03.932 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.932 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:03.932 [2024-07-15 09:40:20.639532] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.932 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.932 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:03.932 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:03.932 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:03.932 09:40:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:06.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:09.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:11.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:13.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:16.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:20.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:23.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:27.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:30.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:36.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:38.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:43.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:45.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:06.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.378 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:08:11.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.808 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:55.926 rmmod nvme_tcp 00:10:55.926 rmmod nvme_fabrics 00:10:55.926 rmmod nvme_keyring 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1796054 ']' 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1796054 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1796054 ']' 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1796054 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = 
Linux ']' 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1796054 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1796054' 00:10:55.926 killing process with pid 1796054 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1796054 00:10:55.926 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1796054 00:10:55.927 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:55.927 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:55.927 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:55.927 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:55.927 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:55.927 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.927 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.927 09:44:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.874 09:44:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:57.874 00:10:57.874 real 3m56.593s 00:10:57.874 user 15m1.384s 00:10:57.874 sys 0m34.549s 00:10:57.874 09:44:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:57.874 09:44:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:57.874 ************************************ 00:10:57.874 END TEST nvmf_connect_disconnect 00:10:57.874 ************************************ 00:10:58.132 09:44:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:58.132 09:44:14 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:58.132 09:44:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:58.132 09:44:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.132 09:44:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:58.132 ************************************ 00:10:58.132 START TEST nvmf_multitarget 00:10:58.132 ************************************ 00:10:58.132 09:44:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:58.132 * Looking for test storage... 
00:10:58.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.132 09:44:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.132 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:58.132 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.132 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.132 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.132 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.132 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.132 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.132 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.132 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.132 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.132 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.132 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:58.132 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:58.132 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:10:58.133 09:44:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.657 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:00.658 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:00.658 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:00.658 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:00.658 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:00.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:00.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:11:00.658 00:11:00.658 --- 10.0.0.2 ping statistics --- 00:11:00.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.658 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:00.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:11:00.658 00:11:00.658 --- 10.0.0.1 ping statistics --- 00:11:00.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.658 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1827163 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1827163 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1827163 ']' 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.658 09:44:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:00.658 [2024-07-15 09:44:17.021845] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:11:00.658 [2024-07-15 09:44:17.021939] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.658 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.658 [2024-07-15 09:44:17.059040] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:00.658 [2024-07-15 09:44:17.086543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.658 [2024-07-15 09:44:17.174614] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.658 [2024-07-15 09:44:17.174684] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.658 [2024-07-15 09:44:17.174697] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.658 [2024-07-15 09:44:17.174707] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.658 [2024-07-15 09:44:17.174716] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.658 [2024-07-15 09:44:17.174866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.658 [2024-07-15 09:44:17.174940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.658 [2024-07-15 09:44:17.174983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.658 [2024-07-15 09:44:17.174986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.658 09:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:00.658 09:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:11:00.658 09:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:00.658 09:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:00.658 09:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:00.658 09:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.658 09:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:00.658 09:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:00.659 09:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:00.659 09:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:00.659 09:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:00.916 "nvmf_tgt_1" 00:11:00.916 09:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:00.916 "nvmf_tgt_2" 00:11:00.916 09:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 
nvmf_get_targets 00:11:00.916 09:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:01.173 09:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:01.173 09:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:01.173 true 00:11:01.173 09:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:01.431 true 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:01.431 rmmod nvme_tcp 00:11:01.431 rmmod nvme_fabrics 00:11:01.431 rmmod nvme_keyring 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1827163 ']' 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1827163 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1827163 ']' 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1827163 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1827163 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1827163' 00:11:01.431 killing process with pid 1827163 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1827163 00:11:01.431 09:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1827163 00:11:01.690 09:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:01.690 09:44:18 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:01.690 09:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:01.690 09:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:01.690 09:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:01.690 09:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.690 09:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:01.690 09:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.225 09:44:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:04.225 00:11:04.225 real 0m5.748s 00:11:04.225 user 0m6.432s 00:11:04.225 sys 0m1.919s 00:11:04.225 09:44:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:04.225 09:44:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:04.225 ************************************ 00:11:04.225 END TEST nvmf_multitarget 00:11:04.225 ************************************ 00:11:04.225 09:44:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:04.225 09:44:20 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:04.225 09:44:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:04.225 09:44:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.225 09:44:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:04.225 ************************************ 00:11:04.225 START TEST nvmf_rpc 00:11:04.225 ************************************ 00:11:04.225 09:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:04.225 * Looking for test storage... 
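[Annotation] Stripped of the xtrace noise, the multitarget assertions that just completed reduce to a count-check pattern: list targets over the per-test RPC helper, count with jq, and compare against the expected total. The $rpc path below is shortened from the full workspace path in the trace:

    rpc=test/nvmf/target/multitarget_rpc.py           # abbreviated path
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]  # only the default target
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]  # default plus the two new ones
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]  # back to the default only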
00:11:04.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:04.225 09:44:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.225 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:04.225 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.225 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.225 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.225 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.225 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:11:04.226 09:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
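[Annotation] The common.sh sourcing above also fixes the host identity reused by every nvme connect later in this test: the NQN comes from nvme gen-hostnqn and the host ID is its trailing UUID. A minimal equivalent of those assignments (the parameter expansion is an illustration, not necessarily the script's exact text):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # bare UUID portion after the last colon
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")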
00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:06.131 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:06.131 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:06.131 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:06.132 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:06.132 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:06.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:11:06.132 00:11:06.132 --- 10.0.0.2 ping statistics --- 00:11:06.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.132 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:06.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:06.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:11:06.132 00:11:06.132 --- 10.0.0.1 ping statistics --- 00:11:06.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.132 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1829265 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1829265 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1829265 ']' 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.132 09:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.132 [2024-07-15 09:44:22.816956] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:11:06.132 [2024-07-15 09:44:22.817044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.132 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.132 [2024-07-15 09:44:22.856322] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:06.132 [2024-07-15 09:44:22.888500] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:06.391 [2024-07-15 09:44:22.982623] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:06.391 [2024-07-15 09:44:22.982696] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.391 [2024-07-15 09:44:22.982713] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.391 [2024-07-15 09:44:22.982727] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.391 [2024-07-15 09:44:22.982738] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:06.391 [2024-07-15 09:44:22.982829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.391 [2024-07-15 09:44:22.982905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.391 [2024-07-15 09:44:22.982935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.391 [2024-07-15 09:44:22.982938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.391 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.391 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:06.391 09:44:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:06.391 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:06.391 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.391 09:44:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.391 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:06.391 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.391 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.391 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.391 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:06.391 "tick_rate": 2700000000, 00:11:06.391 "poll_groups": [ 00:11:06.391 { 00:11:06.391 "name": "nvmf_tgt_poll_group_000", 00:11:06.391 "admin_qpairs": 0, 00:11:06.391 "io_qpairs": 0, 00:11:06.391 "current_admin_qpairs": 0, 00:11:06.391 "current_io_qpairs": 0, 00:11:06.391 "pending_bdev_io": 0, 00:11:06.391 "completed_nvme_io": 0, 00:11:06.391 "transports": [] 00:11:06.391 }, 00:11:06.391 { 00:11:06.391 "name": "nvmf_tgt_poll_group_001", 00:11:06.391 "admin_qpairs": 0, 00:11:06.391 "io_qpairs": 0, 00:11:06.391 "current_admin_qpairs": 0, 00:11:06.391 "current_io_qpairs": 0, 00:11:06.391 "pending_bdev_io": 0, 00:11:06.391 "completed_nvme_io": 0, 00:11:06.391 "transports": [] 00:11:06.391 }, 00:11:06.391 { 00:11:06.391 "name": "nvmf_tgt_poll_group_002", 00:11:06.391 "admin_qpairs": 0, 00:11:06.391 "io_qpairs": 0, 00:11:06.391 "current_admin_qpairs": 0, 00:11:06.391 "current_io_qpairs": 0, 00:11:06.391 "pending_bdev_io": 0, 00:11:06.391 "completed_nvme_io": 0, 00:11:06.391 "transports": [] 00:11:06.391 }, 00:11:06.391 { 00:11:06.391 "name": "nvmf_tgt_poll_group_003", 00:11:06.391 "admin_qpairs": 0, 00:11:06.391 "io_qpairs": 0, 00:11:06.391 "current_admin_qpairs": 0, 00:11:06.391 "current_io_qpairs": 0, 00:11:06.391 "pending_bdev_io": 0, 00:11:06.391 "completed_nvme_io": 0, 00:11:06.391 "transports": [] 00:11:06.391 } 00:11:06.391 ] 00:11:06.391 }' 00:11:06.391 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:06.391 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 
'filter=.poll_groups[].name' 00:11:06.391 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:06.391 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:06.651 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:06.651 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:06.651 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:06.651 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.651 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.651 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.651 [2024-07-15 09:44:23.244271] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.651 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.651 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:06.651 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.651 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.651 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.651 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:06.651 "tick_rate": 2700000000, 00:11:06.651 "poll_groups": [ 00:11:06.651 { 00:11:06.651 "name": "nvmf_tgt_poll_group_000", 00:11:06.651 "admin_qpairs": 0, 00:11:06.651 "io_qpairs": 0, 00:11:06.651 "current_admin_qpairs": 0, 00:11:06.651 "current_io_qpairs": 0, 00:11:06.651 "pending_bdev_io": 0, 00:11:06.651 "completed_nvme_io": 0, 00:11:06.651 "transports": [ 00:11:06.651 { 00:11:06.651 "trtype": "TCP" 00:11:06.651 } 00:11:06.651 ] 00:11:06.651 }, 00:11:06.651 { 00:11:06.651 "name": "nvmf_tgt_poll_group_001", 00:11:06.651 "admin_qpairs": 0, 00:11:06.651 "io_qpairs": 0, 00:11:06.651 "current_admin_qpairs": 0, 00:11:06.651 "current_io_qpairs": 0, 00:11:06.651 "pending_bdev_io": 0, 00:11:06.651 "completed_nvme_io": 0, 00:11:06.651 "transports": [ 00:11:06.651 { 00:11:06.651 "trtype": "TCP" 00:11:06.651 } 00:11:06.651 ] 00:11:06.651 }, 00:11:06.651 { 00:11:06.651 "name": "nvmf_tgt_poll_group_002", 00:11:06.651 "admin_qpairs": 0, 00:11:06.651 "io_qpairs": 0, 00:11:06.651 "current_admin_qpairs": 0, 00:11:06.651 "current_io_qpairs": 0, 00:11:06.651 "pending_bdev_io": 0, 00:11:06.651 "completed_nvme_io": 0, 00:11:06.651 "transports": [ 00:11:06.651 { 00:11:06.651 "trtype": "TCP" 00:11:06.651 } 00:11:06.651 ] 00:11:06.651 }, 00:11:06.651 { 00:11:06.651 "name": "nvmf_tgt_poll_group_003", 00:11:06.651 "admin_qpairs": 0, 00:11:06.651 "io_qpairs": 0, 00:11:06.651 "current_admin_qpairs": 0, 00:11:06.651 "current_io_qpairs": 0, 00:11:06.651 "pending_bdev_io": 0, 00:11:06.651 "completed_nvme_io": 0, 00:11:06.651 "transports": [ 00:11:06.651 { 00:11:06.651 "trtype": "TCP" 00:11:06.651 } 00:11:06.651 ] 00:11:06.651 } 00:11:06.651 ] 00:11:06.651 }' 00:11:06.651 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.652 Malloc1 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.652 [2024-07-15 09:44:23.401601] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:06.652 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:06.652 [2024-07-15 09:44:23.424111] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:06.910 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:06.910 could not add new controller: failed to write to nvme-fabrics device 00:11:06.910 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:06.910 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:06.910 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:06.910 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:06.910 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.910 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.910 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.910 09:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.910 09:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:07.474 09:44:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:07.474 09:44:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:07.474 09:44:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:07.474 09:44:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:07.474 09:44:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:09.372 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:09.372 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:09.372 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.372 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # 
nvme_devices=1 00:11:09.372 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.372 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:09.372 09:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:09.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:09.630 [2024-07-15 09:44:26.213670] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:09.630 Failed to write to 
/dev/nvme-fabrics: Input/output error 00:11:09.630 could not add new controller: failed to write to nvme-fabrics device 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.630 09:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:10.196 09:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:10.196 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:10.196 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:10.196 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:10.196 09:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:12.721 09:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:12.721 09:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:12.721 09:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:12.721 09:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:12.721 09:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.721 09:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:12.721 09:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.721 09:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:12.721 09:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:12.721 09:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:12.721 09:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.721 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.722 09:44:29 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.722 [2024-07-15 09:44:29.036192] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.722 09:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:12.980 09:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:12.980 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:12.980 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:12.980 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:12.980 09:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:15.540 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:15.540 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:15.540 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.540 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:15.540 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.540 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:15.540 09:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.540 09:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:11:15.540 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:15.540 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:15.540 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.540 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:15.540 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.540 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:15.540 09:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.541 [2024-07-15 09:44:31.850661] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.541 09:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:15.798 09:44:32 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:15.798 09:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:15.798 09:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:15.798 09:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:15.798 09:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:17.694 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:17.694 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:17.694 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:17.952 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:17.952 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:17.952 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:17.952 09:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:17.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.952 09:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:17.952 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:17.952 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:17.952 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.952 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:17.952 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.952 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- 
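[Annotation] Each nvme connect in this test is followed by the waitforserial poll visible in the surrounding trace: sleep, list block devices with their serials, and succeed once a namespace carrying the subsystem serial appears, bounded at 16 attempts. A condensed sketch of that helper:

    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            sleep 2
            # lsblk prints one row per device; count rows matching the serial
            [ "$(lsblk -l -o NAME,SERIAL | grep -c "$serial")" -ge 1 ] && return 0
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME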
common/autotest_common.sh@10 -- # set +x 00:11:17.953 [2024-07-15 09:44:34.622475] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.953 09:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:18.884 09:44:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:18.884 09:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:18.884 09:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.884 09:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:18.884 09:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 
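The waitforserial / waitforserial_disconnect steps traced above are bounded polls over lsblk: after nvme connect, the helper loops until a block device carrying the subsystem serial shows up; after nvme disconnect, the inverse check loops until it is gone. A minimal sketch of that pattern, reconstructed from the trace (the 15-retry bound, 2-second sleep, and lsblk/grep pipelines are the ones logged; the real helpers in autotest_common.sh carry extra bookkeeping not shown here):

    # Wait until a block device reporting the given NVMe serial appears.
    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            # lsblk -l prints one device per row; count rows carrying the serial.
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
            sleep 2
        done
        return 1    # device never appeared
    }

    # Inverse check: wait until no device with the serial remains after disconnect.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1    # device never went away
    }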
00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.782 [2024-07-15 09:44:37.483512] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.782 09:44:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:21.719 09:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:21.719 09:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:21.719 09:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:21.719 09:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:21.719 09:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:23.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.616 [2024-07-15 09:44:40.286167] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.616 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.617 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.617 09:44:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:23.617 
09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.617 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.617 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.617 09:44:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:24.181 09:44:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:24.181 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:24.181 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:24.181 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:24.181 09:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:26.706 09:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:26.706 09:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:26.706 09:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:26.706 09:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:26.706 09:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:26.706 09:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:26.706 09:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:26.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 [2024-07-15 09:44:43.065749] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 [2024-07-15 09:44:43.113821] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 [2024-07-15 09:44:43.162025] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
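Each pass traced above (target/rpc.sh lines 81-94) is the same create/connect/verify/teardown cycle, driven through rpc.py on the target side and nvme-cli on the host side. Replayed as a plain loop with the NQN, serial, address, and NSID exactly as logged ($loops, $NVME_HOSTNQN, and $NVME_HOSTID are assumed to be set by the surrounding script, as they are in the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 $loops); do
        $rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 5        # attach Malloc1 as NSID 5
        $rpc nvmf_subsystem_allow_any_host $nqn
        nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t tcp -n $nqn -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME                  # block device must appear
        nvme disconnect -n $nqn
        waitforserial_disconnect SPDKISFASTANDAWESOME       # and must go away again
        $rpc nvmf_subsystem_remove_ns $nqn 5
        $rpc nvmf_delete_subsystem $nqn
    done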
00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 [2024-07-15 09:44:43.210235] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
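The iterations running through this stretch (target/rpc.sh lines 99-107, executed five times) exercise the same RPCs without ever connecting a host: they churn subsystem create/delete to check that listener, namespace, and allow-any-host state is torn down cleanly each round. Condensed, using the same rpc and nqn variables as the sketch above (note the namespace is added without -n here, so the target assigns NSID 1, which is what the remove call uses):

    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns $nqn Malloc1     # no -n: first free NSID (1) is assigned
        $rpc nvmf_subsystem_allow_any_host $nqn
        $rpc nvmf_subsystem_remove_ns $nqn 1
        $rpc nvmf_delete_subsystem $nqn
    done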
00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.707 [2024-07-15 09:44:43.258360] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:26.707 "tick_rate": 2700000000, 00:11:26.707 "poll_groups": [ 00:11:26.707 { 00:11:26.707 "name": "nvmf_tgt_poll_group_000", 00:11:26.707 "admin_qpairs": 2, 00:11:26.707 "io_qpairs": 84, 00:11:26.707 "current_admin_qpairs": 0, 00:11:26.707 "current_io_qpairs": 0, 00:11:26.707 "pending_bdev_io": 0, 00:11:26.707 "completed_nvme_io": 136, 00:11:26.707 "transports": [ 00:11:26.707 { 00:11:26.707 "trtype": "TCP" 00:11:26.707 } 00:11:26.707 ] 00:11:26.707 }, 00:11:26.707 { 00:11:26.707 "name": "nvmf_tgt_poll_group_001", 00:11:26.707 "admin_qpairs": 2, 00:11:26.707 "io_qpairs": 84, 00:11:26.707 "current_admin_qpairs": 0, 00:11:26.707 "current_io_qpairs": 0, 00:11:26.707 "pending_bdev_io": 0, 
00:11:26.707 "completed_nvme_io": 232, 00:11:26.707 "transports": [ 00:11:26.707 { 00:11:26.707 "trtype": "TCP" 00:11:26.707 } 00:11:26.707 ] 00:11:26.707 }, 00:11:26.707 { 00:11:26.707 "name": "nvmf_tgt_poll_group_002", 00:11:26.707 "admin_qpairs": 1, 00:11:26.707 "io_qpairs": 84, 00:11:26.707 "current_admin_qpairs": 0, 00:11:26.707 "current_io_qpairs": 0, 00:11:26.707 "pending_bdev_io": 0, 00:11:26.707 "completed_nvme_io": 136, 00:11:26.707 "transports": [ 00:11:26.707 { 00:11:26.707 "trtype": "TCP" 00:11:26.707 } 00:11:26.707 ] 00:11:26.707 }, 00:11:26.707 { 00:11:26.707 "name": "nvmf_tgt_poll_group_003", 00:11:26.707 "admin_qpairs": 2, 00:11:26.707 "io_qpairs": 84, 00:11:26.707 "current_admin_qpairs": 0, 00:11:26.707 "current_io_qpairs": 0, 00:11:26.707 "pending_bdev_io": 0, 00:11:26.707 "completed_nvme_io": 182, 00:11:26.707 "transports": [ 00:11:26.707 { 00:11:26.707 "trtype": "TCP" 00:11:26.707 } 00:11:26.707 ] 00:11:26.707 } 00:11:26.707 ] 00:11:26.707 }' 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:26.707 rmmod nvme_tcp 00:11:26.707 rmmod nvme_fabrics 00:11:26.707 rmmod nvme_keyring 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1829265 ']' 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1829265 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1829265 ']' 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1829265 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:11:26.707 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1829265 00:11:26.965 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:26.965 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:26.965 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1829265' 00:11:26.965 killing process with pid 1829265 00:11:26.965 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1829265 00:11:26.965 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1829265 00:11:27.224 09:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:27.224 09:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:27.224 09:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:27.224 09:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.225 09:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:27.225 09:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.225 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.225 09:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.152 09:44:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:29.152 00:11:29.152 real 0m25.299s 00:11:29.152 user 1m22.219s 00:11:29.152 sys 0m4.081s 00:11:29.152 09:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.152 09:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.152 ************************************ 00:11:29.152 END TEST nvmf_rpc 00:11:29.152 ************************************ 00:11:29.152 09:44:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:29.152 09:44:45 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:29.152 09:44:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:29.152 09:44:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.152 09:44:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:29.152 ************************************ 00:11:29.152 START TEST nvmf_invalid 00:11:29.152 ************************************ 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:29.152 * Looking for test storage... 
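Before nvmf_invalid starts, the nvmftestfini teardown traced above reduces to a short sequence: flush I/O, unload the host-side NVMe/TCP modules, kill the SPDK target, and dismantle the test network. Sketched from the trace (the ip netns delete step is an assumption, since _remove_spdk_ns runs with its output redirected away and only the final address flush is visible in the log):

    sync
    modprobe -v -r nvme-tcp        # also pulls nvme_fabrics/nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"            # killprocess: first checks the pid is alive and named reactor_0
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1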
00:11:29.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:11:29.152 09:44:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.679 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:31.680 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:31.680 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:31.680 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:31.680 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:31.680 09:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:31.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:31.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:11:31.680 00:11:31.680 --- 10.0.0.2 ping statistics --- 00:11:31.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.680 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:31.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:11:31.680 00:11:31.680 --- 10.0.0.1 ping statistics --- 00:11:31.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.680 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1833862 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1833862 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1833862 ']' 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:31.680 09:44:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:31.680 [2024-07-15 09:44:48.104977] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
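nvmftestinit for the invalid-name tests rebuilt the split-namespace topology the trace walks through: the target-side port (cvl_0_0) moves into a private network namespace, the initiator port (cvl_0_1) stays in the root namespace, and a ping in each direction proves the 10.0.0.0/24 path before the target starts. Replayed as plain commands, all taken from the trace above (both pings reported 0% loss):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back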
00:11:31.680 [2024-07-15 09:44:48.105084] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.680 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.680 [2024-07-15 09:44:48.142980] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:31.680 [2024-07-15 09:44:48.174923] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:31.680 [2024-07-15 09:44:48.268773] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.680 [2024-07-15 09:44:48.268836] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.680 [2024-07-15 09:44:48.268860] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.680 [2024-07-15 09:44:48.268882] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.680 [2024-07-15 09:44:48.268896] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:31.680 [2024-07-15 09:44:48.268955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.680 [2024-07-15 09:44:48.269013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.680 [2024-07-15 09:44:48.269065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.680 [2024-07-15 09:44:48.269068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.681 09:44:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:31.681 09:44:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:11:31.681 09:44:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:31.681 09:44:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:31.681 09:44:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:31.681 09:44:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.681 09:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:31.681 09:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2261 00:11:31.937 [2024-07-15 09:44:48.642415] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:31.937 09:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:31.937 { 00:11:31.937 "nqn": "nqn.2016-06.io.spdk:cnode2261", 00:11:31.937 "tgt_name": "foobar", 00:11:31.937 "method": "nvmf_create_subsystem", 00:11:31.937 "req_id": 1 00:11:31.937 } 00:11:31.937 Got JSON-RPC error response 00:11:31.937 response: 00:11:31.937 { 00:11:31.937 "code": -32603, 00:11:31.937 "message": "Unable to find target foobar" 00:11:31.937 }' 00:11:31.937 09:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:31.937 { 00:11:31.937 "nqn": "nqn.2016-06.io.spdk:cnode2261", 00:11:31.937 "tgt_name": "foobar", 00:11:31.937 "method": "nvmf_create_subsystem", 00:11:31.937 "req_id": 1 
00:11:31.937 } 00:11:31.937 Got JSON-RPC error response 00:11:31.937 response: 00:11:31.937 { 00:11:31.937 "code": -32603, 00:11:31.937 "message": "Unable to find target foobar" 00:11:31.937 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:31.937 09:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:31.937 09:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode32099 00:11:32.194 [2024-07-15 09:44:48.887266] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32099: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:32.194 09:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:32.194 { 00:11:32.194 "nqn": "nqn.2016-06.io.spdk:cnode32099", 00:11:32.194 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:32.194 "method": "nvmf_create_subsystem", 00:11:32.194 "req_id": 1 00:11:32.194 } 00:11:32.194 Got JSON-RPC error response 00:11:32.194 response: 00:11:32.194 { 00:11:32.194 "code": -32602, 00:11:32.194 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:32.194 }' 00:11:32.194 09:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:32.194 { 00:11:32.194 "nqn": "nqn.2016-06.io.spdk:cnode32099", 00:11:32.194 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:32.194 "method": "nvmf_create_subsystem", 00:11:32.195 "req_id": 1 00:11:32.195 } 00:11:32.195 Got JSON-RPC error response 00:11:32.195 response: 00:11:32.195 { 00:11:32.195 "code": -32602, 00:11:32.195 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:32.195 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:32.195 09:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:32.195 09:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12002 00:11:32.453 [2024-07-15 09:44:49.148108] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12002: invalid model number 'SPDK_Controller' 00:11:32.453 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:32.453 { 00:11:32.453 "nqn": "nqn.2016-06.io.spdk:cnode12002", 00:11:32.453 "model_number": "SPDK_Controller\u001f", 00:11:32.453 "method": "nvmf_create_subsystem", 00:11:32.453 "req_id": 1 00:11:32.453 } 00:11:32.453 Got JSON-RPC error response 00:11:32.453 response: 00:11:32.453 { 00:11:32.453 "code": -32602, 00:11:32.453 "message": "Invalid MN SPDK_Controller\u001f" 00:11:32.453 }' 00:11:32.453 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:32.453 { 00:11:32.453 "nqn": "nqn.2016-06.io.spdk:cnode12002", 00:11:32.453 "model_number": "SPDK_Controller\u001f", 00:11:32.453 "method": "nvmf_create_subsystem", 00:11:32.453 "req_id": 1 00:11:32.453 } 00:11:32.453 Got JSON-RPC error response 00:11:32.453 response: 00:11:32.453 { 00:11:32.453 "code": -32602, 00:11:32.453 "message": "Invalid MN SPDK_Controller\u001f" 00:11:32.453 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:32.453 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:32.453 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:32.453 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' 
'48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:32.453 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:32.453 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:32.453 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:32.453 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.453 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:11:32.453 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.454 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ p == \- ]] 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'p,.{LefVtl6;M&wg#>`:@' 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'p,.{LefVtl6;M&wg#>`:@' nqn.2016-06.io.spdk:cnode31816 00:11:32.744 [2024-07-15 09:44:49.469229] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31816: invalid serial number 'p,.{LefVtl6;M&wg#>`:@' 00:11:32.744 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:32.744 { 00:11:32.744 "nqn": "nqn.2016-06.io.spdk:cnode31816", 00:11:32.744 "serial_number": "p,.{LefVtl6;M&wg#>`:@", 00:11:32.744 "method": "nvmf_create_subsystem", 00:11:32.744 "req_id": 1 00:11:32.744 } 00:11:32.744 Got JSON-RPC error response 00:11:32.744 response: 00:11:32.745 { 00:11:32.745 "code": -32602, 00:11:32.745 "message": "Invalid SN p,.{LefVtl6;M&wg#>`:@" 00:11:32.745 }' 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:32.745 { 00:11:32.745 "nqn": "nqn.2016-06.io.spdk:cnode31816", 00:11:32.745 "serial_number": "p,.{LefVtl6;M&wg#>`:@", 00:11:32.745 "method": "nvmf_create_subsystem", 00:11:32.745 "req_id": 1 00:11:32.745 } 00:11:32.745 Got JSON-RPC error response 00:11:32.745 response: 00:11:32.745 { 00:11:32.745 "code": -32602, 00:11:32.745 "message": "Invalid SN p,.{LefVtl6;M&wg#>`:@" 00:11:32.745 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll < length )) 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:32.745 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.004 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:11:33.004 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:11:33.004 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:11:33.004 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 
00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.005 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 
00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ & == \- ]] 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '&h=/Q!=oj'\''itjoQqbNUh{A@Rmh\_4/\$>"Ut1;DY'\''' 00:11:33.006 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '&h=/Q!=oj'\''itjoQqbNUh{A@Rmh\_4/\$>"Ut1;DY'\''' nqn.2016-06.io.spdk:cnode14523 00:11:33.264 [2024-07-15 09:44:49.882576] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14523: invalid model number '&h=/Q!=oj'itjoQqbNUh{A@Rmh\_4/\$>"Ut1;DY'' 00:11:33.264 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:33.264 { 00:11:33.264 "nqn": "nqn.2016-06.io.spdk:cnode14523", 00:11:33.264 "model_number": "&h=/Q!=oj'\''itjoQqbNUh{A@Rmh\\_4/\\$>\"Ut1;DY'\''", 00:11:33.264 "method": "nvmf_create_subsystem", 00:11:33.264 "req_id": 1 00:11:33.264 } 00:11:33.264 Got JSON-RPC error response 00:11:33.264 response: 00:11:33.264 { 00:11:33.264 "code": -32602, 00:11:33.264 "message": "Invalid MN &h=/Q!=oj'\''itjoQqbNUh{A@Rmh\\_4/\\$>\"Ut1;DY'\''" 00:11:33.264 }' 00:11:33.264 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:33.264 { 00:11:33.264 "nqn": "nqn.2016-06.io.spdk:cnode14523", 00:11:33.264 "model_number": "&h=/Q!=oj'itjoQqbNUh{A@Rmh\\_4/\\$>\"Ut1;DY'", 00:11:33.264 "method": "nvmf_create_subsystem", 00:11:33.264 "req_id": 1 00:11:33.264 } 00:11:33.264 Got JSON-RPC error response 00:11:33.264 response: 00:11:33.264 { 00:11:33.264 "code": -32602, 00:11:33.264 "message": "Invalid MN &h=/Q!=oj'itjoQqbNUh{A@Rmh\\_4/\\$>\"Ut1;DY'" 00:11:33.264 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:33.264 09:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:33.521 [2024-07-15 09:44:50.143557] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.522 09:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:33.779 09:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:33.779 09:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:33.779 09:44:50 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:33.779 09:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:33.779 09:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:34.037 [2024-07-15 09:44:50.665265] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:34.037 09:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:11:34.037 { 00:11:34.037 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:34.037 "listen_address": { 00:11:34.037 "trtype": "tcp", 00:11:34.037 "traddr": "", 00:11:34.037 "trsvcid": "4421" 00:11:34.037 }, 00:11:34.037 "method": "nvmf_subsystem_remove_listener", 00:11:34.037 "req_id": 1 00:11:34.037 } 00:11:34.037 Got JSON-RPC error response 00:11:34.037 response: 00:11:34.037 { 00:11:34.037 "code": -32602, 00:11:34.037 "message": "Invalid parameters" 00:11:34.037 }' 00:11:34.037 09:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:11:34.037 { 00:11:34.037 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:34.037 "listen_address": { 00:11:34.037 "trtype": "tcp", 00:11:34.037 "traddr": "", 00:11:34.037 "trsvcid": "4421" 00:11:34.037 }, 00:11:34.037 "method": "nvmf_subsystem_remove_listener", 00:11:34.037 "req_id": 1 00:11:34.037 } 00:11:34.037 Got JSON-RPC error response 00:11:34.037 response: 00:11:34.037 { 00:11:34.037 "code": -32602, 00:11:34.037 "message": "Invalid parameters" 00:11:34.037 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:34.037 09:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11056 -i 0 00:11:34.294 [2024-07-15 09:44:50.909995] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11056: invalid cntlid range [0-65519] 00:11:34.294 09:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:11:34.294 { 00:11:34.294 "nqn": "nqn.2016-06.io.spdk:cnode11056", 00:11:34.294 "min_cntlid": 0, 00:11:34.294 "method": "nvmf_create_subsystem", 00:11:34.294 "req_id": 1 00:11:34.294 } 00:11:34.294 Got JSON-RPC error response 00:11:34.294 response: 00:11:34.294 { 00:11:34.294 "code": -32602, 00:11:34.294 "message": "Invalid cntlid range [0-65519]" 00:11:34.294 }' 00:11:34.294 09:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:11:34.294 { 00:11:34.294 "nqn": "nqn.2016-06.io.spdk:cnode11056", 00:11:34.294 "min_cntlid": 0, 00:11:34.294 "method": "nvmf_create_subsystem", 00:11:34.294 "req_id": 1 00:11:34.294 } 00:11:34.294 Got JSON-RPC error response 00:11:34.294 response: 00:11:34.294 { 00:11:34.294 "code": -32602, 00:11:34.294 "message": "Invalid cntlid range [0-65519]" 00:11:34.294 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:34.294 09:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15117 -i 65520 00:11:34.552 [2024-07-15 09:44:51.162791] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15117: invalid cntlid range [65520-65519] 00:11:34.552 09:44:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:34.552 { 00:11:34.552 "nqn": "nqn.2016-06.io.spdk:cnode15117", 00:11:34.552 "min_cntlid": 65520, 00:11:34.552 "method": 
"nvmf_create_subsystem", 00:11:34.552 "req_id": 1 00:11:34.552 } 00:11:34.552 Got JSON-RPC error response 00:11:34.552 response: 00:11:34.552 { 00:11:34.552 "code": -32602, 00:11:34.552 "message": "Invalid cntlid range [65520-65519]" 00:11:34.552 }' 00:11:34.552 09:44:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:34.552 { 00:11:34.552 "nqn": "nqn.2016-06.io.spdk:cnode15117", 00:11:34.552 "min_cntlid": 65520, 00:11:34.552 "method": "nvmf_create_subsystem", 00:11:34.552 "req_id": 1 00:11:34.552 } 00:11:34.552 Got JSON-RPC error response 00:11:34.552 response: 00:11:34.552 { 00:11:34.552 "code": -32602, 00:11:34.552 "message": "Invalid cntlid range [65520-65519]" 00:11:34.552 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:34.552 09:44:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3658 -I 0 00:11:34.810 [2024-07-15 09:44:51.427645] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3658: invalid cntlid range [1-0] 00:11:34.810 09:44:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:34.810 { 00:11:34.810 "nqn": "nqn.2016-06.io.spdk:cnode3658", 00:11:34.810 "max_cntlid": 0, 00:11:34.810 "method": "nvmf_create_subsystem", 00:11:34.810 "req_id": 1 00:11:34.810 } 00:11:34.810 Got JSON-RPC error response 00:11:34.810 response: 00:11:34.810 { 00:11:34.810 "code": -32602, 00:11:34.810 "message": "Invalid cntlid range [1-0]" 00:11:34.810 }' 00:11:34.810 09:44:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:34.810 { 00:11:34.810 "nqn": "nqn.2016-06.io.spdk:cnode3658", 00:11:34.810 "max_cntlid": 0, 00:11:34.810 "method": "nvmf_create_subsystem", 00:11:34.810 "req_id": 1 00:11:34.810 } 00:11:34.810 Got JSON-RPC error response 00:11:34.810 response: 00:11:34.810 { 00:11:34.810 "code": -32602, 00:11:34.810 "message": "Invalid cntlid range [1-0]" 00:11:34.810 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:34.810 09:44:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10421 -I 65520 00:11:35.067 [2024-07-15 09:44:51.672466] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10421: invalid cntlid range [1-65520] 00:11:35.067 09:44:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:35.067 { 00:11:35.067 "nqn": "nqn.2016-06.io.spdk:cnode10421", 00:11:35.067 "max_cntlid": 65520, 00:11:35.067 "method": "nvmf_create_subsystem", 00:11:35.067 "req_id": 1 00:11:35.067 } 00:11:35.067 Got JSON-RPC error response 00:11:35.067 response: 00:11:35.067 { 00:11:35.067 "code": -32602, 00:11:35.067 "message": "Invalid cntlid range [1-65520]" 00:11:35.067 }' 00:11:35.067 09:44:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:35.067 { 00:11:35.067 "nqn": "nqn.2016-06.io.spdk:cnode10421", 00:11:35.067 "max_cntlid": 65520, 00:11:35.067 "method": "nvmf_create_subsystem", 00:11:35.067 "req_id": 1 00:11:35.067 } 00:11:35.067 Got JSON-RPC error response 00:11:35.067 response: 00:11:35.067 { 00:11:35.067 "code": -32602, 00:11:35.067 "message": "Invalid cntlid range [1-65520]" 00:11:35.067 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:35.067 09:44:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode8680 -i 6 -I 5 00:11:35.325 [2024-07-15 09:44:51.917270] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8680: invalid cntlid range [6-5] 00:11:35.325 09:44:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:35.325 { 00:11:35.325 "nqn": "nqn.2016-06.io.spdk:cnode8680", 00:11:35.325 "min_cntlid": 6, 00:11:35.325 "max_cntlid": 5, 00:11:35.325 "method": "nvmf_create_subsystem", 00:11:35.325 "req_id": 1 00:11:35.325 } 00:11:35.325 Got JSON-RPC error response 00:11:35.325 response: 00:11:35.325 { 00:11:35.325 "code": -32602, 00:11:35.325 "message": "Invalid cntlid range [6-5]" 00:11:35.325 }' 00:11:35.325 09:44:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:35.325 { 00:11:35.325 "nqn": "nqn.2016-06.io.spdk:cnode8680", 00:11:35.325 "min_cntlid": 6, 00:11:35.325 "max_cntlid": 5, 00:11:35.325 "method": "nvmf_create_subsystem", 00:11:35.325 "req_id": 1 00:11:35.325 } 00:11:35.325 Got JSON-RPC error response 00:11:35.325 response: 00:11:35.325 { 00:11:35.325 "code": -32602, 00:11:35.325 "message": "Invalid cntlid range [6-5]" 00:11:35.325 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:35.325 09:44:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:35.325 09:44:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:35.325 { 00:11:35.325 "name": "foobar", 00:11:35.325 "method": "nvmf_delete_target", 00:11:35.325 "req_id": 1 00:11:35.325 } 00:11:35.325 Got JSON-RPC error response 00:11:35.325 response: 00:11:35.325 { 00:11:35.325 "code": -32602, 00:11:35.325 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:35.325 }' 00:11:35.325 09:44:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:35.325 { 00:11:35.325 "name": "foobar", 00:11:35.325 "method": "nvmf_delete_target", 00:11:35.325 "req_id": 1 00:11:35.325 } 00:11:35.325 Got JSON-RPC error response 00:11:35.325 response: 00:11:35.325 { 00:11:35.325 "code": -32602, 00:11:35.325 "message": "The specified target doesn't exist, cannot delete it." 
00:11:35.325 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:35.325 09:44:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:35.325 09:44:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:35.325 09:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:35.325 09:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:11:35.325 09:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:35.326 09:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:11:35.326 09:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:35.326 09:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:35.326 rmmod nvme_tcp 00:11:35.326 rmmod nvme_fabrics 00:11:35.326 rmmod nvme_keyring 00:11:35.326 09:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:35.326 09:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:11:35.326 09:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:11:35.326 09:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1833862 ']' 00:11:35.326 09:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1833862 00:11:35.326 09:44:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1833862 ']' 00:11:35.326 09:44:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1833862 00:11:35.326 09:44:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:11:35.326 09:44:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:35.584 09:44:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1833862 00:11:35.584 09:44:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:35.584 09:44:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:35.584 09:44:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1833862' 00:11:35.584 killing process with pid 1833862 00:11:35.584 09:44:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1833862 00:11:35.584 09:44:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1833862 00:11:35.584 09:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:35.584 09:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:35.584 09:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:35.584 09:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:35.584 09:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:35.584 09:44:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.584 09:44:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:35.584 09:44:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.121 09:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:38.121 00:11:38.121 real 0m8.556s 00:11:38.121 user 0m19.943s 00:11:38.121 sys 0m2.374s 00:11:38.121 09:44:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:38.121 09:44:54 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:38.121 ************************************ 00:11:38.121 END TEST nvmf_invalid 00:11:38.121 ************************************ 00:11:38.121 09:44:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:38.121 09:44:54 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:38.121 09:44:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:38.121 09:44:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:38.121 09:44:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:38.121 ************************************ 00:11:38.121 START TEST nvmf_abort 00:11:38.121 ************************************ 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:38.121 * Looking for test storage... 00:11:38.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:38.121 09:44:54 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.121 09:44:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:38.122 09:44:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.122 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:38.122 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:38.122 09:44:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:11:38.122 09:44:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:40.024 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:40.024 
09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:40.025 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:40.025 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:40.025 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:40.025 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:40.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:40.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:11:40.025 00:11:40.025 --- 10.0.0.2 ping statistics --- 00:11:40.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.025 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:40.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:40.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:11:40.025 00:11:40.025 --- 10.0.0.1 ping statistics --- 00:11:40.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.025 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1836371 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1836371 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1836371 ']' 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:40.025 09:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:40.025 [2024-07-15 09:44:56.670852] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
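With the network verified, nvmfappstart launches the target inside that namespace and blocks until its RPC socket answers. Outside the harness the equivalent is roughly the following, with $SPDK standing in for the checkout path and the polling loop an illustrative stand-in for the waitforlisten helper:

  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the app responds.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
      sleep 0.5
  done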
00:11:40.025 [2024-07-15 09:44:56.670969] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.025 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.025 [2024-07-15 09:44:56.710275] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:40.025 [2024-07-15 09:44:56.742660] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:40.284 [2024-07-15 09:44:56.838003] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.284 [2024-07-15 09:44:56.838064] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.284 [2024-07-15 09:44:56.838080] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.284 [2024-07-15 09:44:56.838093] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.284 [2024-07-15 09:44:56.838105] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:40.284 [2024-07-15 09:44:56.838187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.284 [2024-07-15 09:44:56.838243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.284 [2024-07-15 09:44:56.838246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.284 09:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:40.284 09:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:11:40.284 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:40.284 09:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:40.284 09:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:40.284 09:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.284 09:44:56 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:40.284 09:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.284 09:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:40.284 [2024-07-15 09:44:56.975073] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:40.284 09:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.284 09:44:56 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:40.284 09:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.284 09:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:40.284 Malloc0 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:40.284 Delay0 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:40.284 [2024-07-15 09:44:57.050707] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.284 09:44:57 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:40.542 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.542 [2024-07-15 09:44:57.155972] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:43.071 Initializing NVMe Controllers 00:11:43.071 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:43.071 controller IO queue size 128 less than required 00:11:43.071 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:43.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:43.071 Initialization complete. Launching workers. 
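Every rpc_cmd above is rpc.py aimed at /var/tmp/spdk.sock, so the whole abort-test bring-up can be written out directly. Same commands and values as traced, with $rpc abbreviating the rpc.py path under the $SPDK checkout:

  rpc="$SPDK/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB bdev, 4 KiB blocks
  # Wrap it in a delay bdev so reads stay in flight long enough to be aborted.
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Hammer the slow namespace with the abort example at queue depth 128.
  "$SPDK/build/examples/abort" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The counters that follow are the point of the test: of 33718 aborts submitted, 33661 succeeded against reads still queued in the delay bdev.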
00:11:43.071 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33657 00:11:43.071 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33718, failed to submit 62 00:11:43.071 success 33661, unsuccess 57, failed 0 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:43.071 rmmod nvme_tcp 00:11:43.071 rmmod nvme_fabrics 00:11:43.071 rmmod nvme_keyring 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1836371 ']' 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1836371 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1836371 ']' 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1836371 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1836371 00:11:43.071 09:44:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:43.072 09:44:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:43.072 09:44:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1836371' 00:11:43.072 killing process with pid 1836371 00:11:43.072 09:44:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1836371 00:11:43.072 09:44:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1836371 00:11:43.072 09:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:43.072 09:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:43.072 09:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:43.072 09:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:43.072 09:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:43.072 09:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.072 09:44:59 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:43.072 09:44:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.973 09:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:44.973 00:11:44.973 real 0m7.183s 00:11:44.973 user 0m10.465s 00:11:44.973 sys 0m2.513s 00:11:44.973 09:45:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:44.973 09:45:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:44.973 ************************************ 00:11:44.973 END TEST nvmf_abort 00:11:44.973 ************************************ 00:11:44.973 09:45:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:44.973 09:45:01 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:44.973 09:45:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:44.973 09:45:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.973 09:45:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:44.973 ************************************ 00:11:44.973 START TEST nvmf_ns_hotplug_stress 00:11:44.973 ************************************ 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:44.973 * Looking for test storage... 00:11:44.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.973 09:45:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:44.973 09:45:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:44.973 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:44.974 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.974 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:45.233 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:45.233 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:45.233 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.233 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.233 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.233 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:45.233 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:45.233 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:45.233 09:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.147 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.147 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:47.147 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:47.147 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:47.147 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:47.147 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:47.147 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:47.147 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:47.147 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:47.147 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:11:47.147 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:47.148 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:47.148 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.148 09:45:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:47.148 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:47.148 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.148 09:45:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:47.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:11:47.148 00:11:47.148 --- 10.0.0.2 ping statistics --- 00:11:47.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.148 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
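The scan just repeated for this suite (gather_supported_nvmf_pci_devs) is a sysfs walk: match known vendor:device IDs, then list the net interfaces the kernel parked under each PCI function. The idea in isolation, hard-coding the 0x8086:0x159b (ice/E810) pair this host matched -- purely illustrative:

  for pci in /sys/bus/pci/devices/*; do
      [ "$(cat "$pci/vendor" 2>/dev/null)" = 0x8086 ] || continue
      [ "$(cat "$pci/device" 2>/dev/null)" = 0x159b ] || continue
      for net in "$pci"/net/*; do
          [ -e "$net" ] && echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done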
00:11:47.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:11:47.148 00:11:47.148 --- 10.0.0.1 ping statistics --- 00:11:47.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.148 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1838827 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1838827 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1838827 ']' 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:47.148 09:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.407 [2024-07-15 09:45:03.974381] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:11:47.407 [2024-07-15 09:45:03.974460] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.407 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.407 [2024-07-15 09:45:04.011801] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
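From here the suite enters its stress loop: NULL1, a resizable null bdev, is exported through cnode1, spdk_nvme_perf reads from it over TCP, and while perf is alive the namespace set is churned and the bdev grown one megabyte per pass. A skeleton of the loop traced below, with $rpc again abbreviating rpc.py; the authoritative version is test/nvmf/target/ns_hotplug_stress.sh:

  $rpc bdev_null_create NULL1 1000 512                 # 1000 MiB, 512 B blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  "$SPDK/build/bin/spdk_nvme_perf" -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do            # run while perf is alive
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      size=$((size + 1))
      $rpc bdev_null_resize NULL1 "$size"
  done

The recurring 'Message suppressed 999 times: Read completed with error (sct=0, sc=11)' lines below are the expected fallout: status 11 (0x0b) is the NVMe generic code for Invalid Namespace or Format, returned to reads that race a namespace removal.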
00:11:47.407 [2024-07-15 09:45:04.041987] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:47.407 [2024-07-15 09:45:04.135981] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.407 [2024-07-15 09:45:04.136047] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.407 [2024-07-15 09:45:04.136073] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.407 [2024-07-15 09:45:04.136087] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.407 [2024-07-15 09:45:04.136100] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:47.407 [2024-07-15 09:45:04.136195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.407 [2024-07-15 09:45:04.136250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.407 [2024-07-15 09:45:04.136254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.664 09:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:47.664 09:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:11:47.664 09:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:47.664 09:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:47.664 09:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.664 09:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.664 09:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:47.664 09:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:47.921 [2024-07-15 09:45:04.511032] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.921 09:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:48.178 09:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.435 [2024-07-15 09:45:05.021970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.435 09:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:48.693 09:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:48.990 Malloc0 00:11:48.990 09:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:49.272 Delay0 00:11:49.272 09:45:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:49.530 09:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:49.788 NULL1 00:11:49.788 09:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:50.045 09:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1839127 00:11:50.045 09:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:50.045 09:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:11:50.045 09:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.045 EAL: No free 2048 kB hugepages reported on node 1 00:11:50.975 Read completed with error (sct=0, sc=11) 00:11:51.232 09:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:51.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.489 09:45:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:51.489 09:45:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:51.746 true 00:11:51.746 09:45:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:11:51.746 09:45:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.308 09:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.565 09:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:52.565 09:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:52.822 true 00:11:52.822 09:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:11:52.822 09:45:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.079 09:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:53.336 09:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:53.336 09:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:53.593 true 00:11:53.593 09:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:11:53.593 09:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.849 09:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:54.106 09:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:54.106 09:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:54.363 true 00:11:54.363 09:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:11:54.363 09:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.740 09:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:55.740 09:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:55.740 09:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:56.022 true 00:11:56.022 09:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:11:56.022 09:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.280 09:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:56.546 09:45:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:56.546 09:45:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:56.546 true 00:11:56.805 09:45:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:11:56.805 09:45:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:11:57.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.631 09:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.889 09:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:57.889 09:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:57.889 true 00:11:57.889 09:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:11:57.889 09:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.147 09:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:58.405 09:45:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:58.405 09:45:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:58.663 true 00:11:58.663 09:45:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:11:58.663 09:45:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.601 09:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:59.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.859 09:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:59.859 09:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:00.117 true 00:12:00.117 09:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:12:00.117 09:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.375 09:45:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:00.633 09:45:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:00.633 09:45:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:00.891 true 00:12:00.891 09:45:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:12:00.891 09:45:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.828 09:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:02.085 09:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:02.085 09:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:02.389 true 00:12:02.389 09:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:12:02.389 09:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.694 09:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:02.694 09:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:02.694 09:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:02.952 true 00:12:02.952 09:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:12:02.952 09:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.884 09:45:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:03.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.141 09:45:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:04.141 09:45:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:04.398 true 00:12:04.398 09:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:12:04.398 09:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.654 09:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:04.911 09:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:04.911 09:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:05.168 true 00:12:05.168 
09:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:12:05.168 09:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.100 09:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:06.358 09:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:06.358 09:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:06.616 true 00:12:06.616 09:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:12:06.616 09:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.874 09:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:07.130 09:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:07.130 09:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:07.386 true 00:12:07.386 09:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:12:07.386 09:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:08.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:08.317 09:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:08.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:08.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:08.317 09:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:08.317 09:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:08.574 true 00:12:08.574 09:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:12:08.574 09:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:08.832 09:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:09.089 09:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:09.089 09:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:09.346 true 00:12:09.346 09:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:12:09.346 09:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:10.279 09:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:10.536 09:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:10.536 09:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:10.794 true 00:12:10.794 09:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:12:10.794 09:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.051 09:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:11.308 09:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:11.308 09:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:11.565 true 00:12:11.565 09:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:12:11.565 09:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.498 09:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.498 09:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:12.498 09:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:12.755 true 00:12:12.755 09:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:12:12.755 09:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.012 09:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:13.270 09:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:13.270 09:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1022 00:12:13.527 true 00:12:13.527 09:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:12:13.527 09:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.461 09:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:14.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.461 09:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:14.461 09:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:14.719 true 00:12:14.719 09:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:12:14.719 09:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.976 09:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:15.234 09:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:15.234 09:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:15.492 true 00:12:15.492 09:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:12:15.492 09:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.453 09:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:16.709 09:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:16.709 09:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:16.966 true 00:12:16.966 09:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127 00:12:16.966 09:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.222 09:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:17.478 09:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:17.478 09:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:17.735 true 00:12:17.735 09:45:34 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127
00:12:17.735 09:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:17.991 09:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:18.247 09:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:12:18.247 09:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:12:18.247 true
00:12:18.247 09:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127
00:12:18.247 09:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:19.618 09:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:19.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:12:19.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:12:19.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:12:19.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:12:19.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:12:19.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:12:19.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:12:19.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:12:19.618 09:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:12:19.618 09:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:12:20.183 true
00:12:20.183 09:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127
00:12:20.183 09:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:20.749 09:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:20.749 Initializing NVMe Controllers
00:12:20.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:20.749 Controller IO queue size 128, less than required.
00:12:20.749 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:20.749 Controller IO queue size 128, less than required.
00:12:20.749 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
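The cycle traced at ns_hotplug_stress.sh@44-50 above is the core of the hot-plug stress phase: as long as the background I/O job (PID 1839127, polled with kill -0 at @44) stays alive, namespace 1 is hot-removed from nqn.2016-06.io.spdk:cnode1, the Delay0 bdev is hot-added back, and the NULL1 bdev is resized one step larger per pass (null_size=1003 through 1028 in this run). A minimal bash sketch of that loop, reconstructed from the trace markers; the rpc_py and perf_pid variables and the starting size are assumptions, while the RPC subcommands, NQN, and bdev names are taken verbatim from the trace:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    perf_pid=1839127                                   # assumed: the I/O job polled at @44
    null_size=1002

    while kill -0 "$perf_pid" 2>/dev/null; do          # @44: run until the I/O job exits
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" 1    # @45: hot-remove namespace 1
        "$rpc_py" nvmf_subsystem_add_ns "$nqn" Delay0  # @46: hot-add the Delay0 bdev back
        ((++null_size))                                # @49: 1003, 1004, ...
        "$rpc_py" bdev_null_resize NULL1 "$null_size"  # @50: grow NULL1 under load
    done

The suppressed "Read completed with error" messages are the expected fallout of reads racing the hot-remove, and the controller-initialization lines above, together with the latency summary just below, are that I/O job winding down. The summary ties out: 968.50 + 11435.13 = 12403.63 IOPS in the Total row, and its 16110.36 us average latency is the IOPS-weighted mean of the two per-namespace averages.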
00:12:20.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:20.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:12:20.749 Initialization complete. Launching workers.
00:12:20.749 ========================================================
00:12:20.749                                                                                                    Latency(us)
00:12:20.749 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:12:20.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     968.50       0.47   74163.84    2769.76 1012638.20
00:12:20.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   11435.13       5.58   11193.51    1814.20  449150.88
00:12:20.749 ========================================================
00:12:20.749 Total                                                                    :   12403.63       6.06   16110.36    1814.20 1012638.20
00:12:20.749
00:12:21.006 09:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:12:21.006 09:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:12:21.263 true
00:12:21.263 09:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839127
00:12:21.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1839127) - No such process
00:12:21.263 09:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1839127
00:12:21.263 09:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:21.521 09:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:21.778 09:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:12:21.778 09:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:12:21.778 09:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:12:21.778 09:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:21.778 09:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:12:22.035 null0
00:12:22.035 09:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:22.035 09:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:22.035 09:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:12:22.293 null1
00:12:22.293 09:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:22.293 09:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:22.293 09:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:12:22.551 null2
00:12:22.551 09:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:22.551 09:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:22.551 09:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:12:22.809 null3 00:12:22.809 09:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:22.809 09:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:22.809 09:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:12:23.067 null4 00:12:23.067 09:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:23.067 09:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:23.067 09:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:12:23.325 null5 00:12:23.325 09:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:23.325 09:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:23.325 09:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:12:23.581 null6 00:12:23.581 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:23.581 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:23.581 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:12:23.837 null7 00:12:23.837 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:23.837 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:23.837 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:12:23.837 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:23.837 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:23.837 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:12:23.837 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:23.837 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:12:23.837 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:23.837 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:23.837 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.837 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:23.837 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:23.837 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:12:23.837 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:23.837 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:23.837 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:12:23.837 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:23.837 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
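With the single-namespace loop finished, @58 onward switches to concurrent churn: eight null bdevs (null0 through null7, created at @60 with the arguments 100 and 4096 for size and block size) each get an add_remove worker forked into the background, one namespace ID per worker (the "local nsid=... bdev=..." entries at @14). A sketch of add_remove as it appears in the @14-@18 markers; only the rpc_py variable is assumed:

    # add_remove <nsid> <bdev>: hot-add and hot-remove one namespace ten times
    add_remove() {
        local nsid=$1 bdev=$2                                                             # @14
        for ((i = 0; i < 10; i++)); do                                                    # @16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # @17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # @18
        done
    }

Giving each worker its own nsid and backing bdev lets eight attach/detach sequences run against the same subsystem at once without the RPCs colliding on a namespace ID.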
00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
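The launcher around those workers (@58-@66) collects one background PID per fork and joins them all once the eighth worker has started just below (the "wait 1843684 1843685 ..." entry). A sketch under the same rpc_py assumption:

    nthreads=8                                       # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do             # @59-60: one null bdev per worker
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do             # @62-63: fork the workers
        add_remove "$((i + 1))" "null$i" &           # namespace IDs 1 through 8
        pids+=($!)                                   # @64: remember each worker's PID
    done
    wait "${pids[@]}"                                # @66: join all eight workers

Everything that follows, down to the end of this phase, is the interleaved trace of those eight workers, which is why the add and remove entries for namespaces 1 through 8 appear shuffled rather than in order.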
00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1843684 1843685 1843687 1843689 1843691 1843693 1843695 1843697 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.838 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:24.094 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:24.094 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:24.094 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:24.094 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.094 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:24.094 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:24.094 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:24.094 09:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:24.351 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.351 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.351 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:24.351 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.351 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.351 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:12:24.351 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.351 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.351 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:24.351 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.351 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.351 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:24.351 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.351 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.351 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:24.351 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.351 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.352 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:24.352 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.352 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.352 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:24.352 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.352 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.352 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:24.609 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:24.609 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:24.866 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:24.866 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:24.867 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:24.867 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.867 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:24.867 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.124 09:45:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.124 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:25.381 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:25.381 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:25.381 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:25.382 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:25.382 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:25.382 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:25.382 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.382 09:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:25.639 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.639 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.639 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:25.639 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.639 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.639 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:25.639 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.639 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.639 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:25.639 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.639 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.640 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:25.640 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.640 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.640 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:25.640 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.640 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.640 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:25.640 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.640 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.640 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:25.640 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.640 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.640 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:25.897 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:25.898 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:25.898 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:25.898 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:25.898 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:25.898 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:25.898 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:25.898 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.156 09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.156 
09:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:26.414 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:26.414 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:26.414 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:26.414 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:26.414 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:26.414 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:26.414 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.414 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.671 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:26.928 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:26.928 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:26.928 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:26.928 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:26.928 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:26.928 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:26.928 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.928 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.186 09:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:27.444 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:27.444 
09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:27.444 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:27.444 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:27.444 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:27.444 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.444 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:27.444 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.702 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:27.959 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:27.959 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:27.959 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:27.959 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:27.959 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:27.959 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:27.959 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.959 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.217 09:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:28.477 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:28.477 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:28.477 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:28.477 
09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:28.739 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:28.739 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.739 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:28.739 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:28.739 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.739 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.739 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:28.739 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.739 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.739 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:28.739 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.739 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.739 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:28.739 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.739 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.997 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:28.997 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.997 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.997 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:28.997 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.997 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.997 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:28.997 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.997 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.997 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:28.997 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.997 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.997 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:28.997 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:28.997 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:29.255 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:29.255 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:29.255 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:29.255 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.255 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:29.255 09:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
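The churn traced above at ns_hotplug_stress.sh@16-18 is consistent with eight concurrent workers, one per namespace, each running ten add/remove rounds against cnode1. A minimal bash sketch of that shape follows; the helper name add_remove is hypothetical, while the rpc.py path, the NQN, the i < 10 bound, the nsids 1-8, and the null0-null7 bdev names are all taken from the trace. Everything else is reconstruction, not the verbatim SPDK script:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  add_remove() {
      # One worker: hot-add then hot-remove its namespace, ten times
      # (matches the @16 loop markers and the @17/@18 rpc calls above).
      local nsid=$1 bdev=$2 i
      for ((i = 0; i < 10; ++i)); do
          $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
          $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"
      done
  }

  for n in {1..8}; do
      add_remove "$n" "null$((n - 1))" &   # backgrounding would explain the shuffled xtrace order
  done
  wait

The eight bare (( ++i )) / (( i < 10 )) pairs just before the trap reset below read like each worker's final, failing loop check.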
00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:29.513 rmmod nvme_tcp 00:12:29.513 rmmod nvme_fabrics 00:12:29.513 rmmod nvme_keyring 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1838827 ']' 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1838827 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1838827 ']' 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1838827 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1838827 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1838827' 00:12:29.513 killing process with pid 1838827 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1838827 00:12:29.513 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1838827 00:12:29.771 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:29.771 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:29.771 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:29.771 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:29.771 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:29.771 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.771 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.771 09:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.678 09:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:31.678 00:12:31.678 real 0m46.747s 00:12:31.678 user 3m32.657s 00:12:31.678 sys 0m16.243s 00:12:31.678 09:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:31.678 09:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.678 ************************************ 00:12:31.678 END TEST nvmf_ns_hotplug_stress 00:12:31.678 ************************************ 00:12:31.678 09:45:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:31.678 09:45:48 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:31.678 09:45:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:31.678 09:45:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:31.678 09:45:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:31.937 ************************************ 00:12:31.937 START TEST nvmf_connect_stress 00:12:31.937 ************************************ 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:31.937 * Looking for test storage... 
00:12:31.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:31.937 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:31.938 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.938 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:31.938 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:31.938 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:31.938 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.938 09:45:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:31.938 09:45:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.938 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:31.938 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:31.938 09:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:12:31.938 09:45:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.841 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.841 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:33.841 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:33.841 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:33.841 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:33.841 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:33.841 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:33.841 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:33.841 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:33.841 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:12:33.841 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:33.842 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:33.842 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:33.842 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:33.842 09:45:50 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:33.842 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:33.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:33.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:12:33.842 00:12:33.842 --- 10.0.0.2 ping statistics --- 00:12:33.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.842 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:33.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:12:33.842 00:12:33.842 --- 10.0.0.1 ping statistics --- 00:12:33.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.842 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:33.842 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:34.102 09:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:34.102 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:34.102 09:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:34.102 09:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.102 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1846442 00:12:34.102 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:34.102 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1846442 00:12:34.102 09:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1846442 ']' 00:12:34.102 09:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.102 09:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:34.102 09:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.102 09:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:34.102 09:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.102 [2024-07-15 09:45:50.681759] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
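For reference, the nvmf_tcp_init sequence traced above (nvmf/common.sh@244-268) condenses to the commands below. Every line is lifted from the log, with the detected E810 net devices cvl_0_0/cvl_0_1 as named there; the target port is isolated in its own network namespace so target (10.0.0.2) and initiator (10.0.0.1) can share one physical host:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                  # initiator -> target (0.247 ms above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator (0.090 ms above)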
00:12:34.102 [2024-07-15 09:45:50.681859] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.102 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.102 [2024-07-15 09:45:50.720654] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:34.102 [2024-07-15 09:45:50.753133] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:34.102 [2024-07-15 09:45:50.847185] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.102 [2024-07-15 09:45:50.847237] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.102 [2024-07-15 09:45:50.847261] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.102 [2024-07-15 09:45:50.847271] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.102 [2024-07-15 09:45:50.847281] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.102 [2024-07-15 09:45:50.847390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.102 [2024-07-15 09:45:50.847460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.102 [2024-07-15 09:45:50.847462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.363 09:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:34.363 09:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:12:34.363 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:34.363 09:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:34.363 09:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.363 09:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.363 09:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:34.363 09:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.363 09:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.363 [2024-07-15 09:45:50.991956] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.363 09:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.363 09:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:34.363 09:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.363 09:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.363 [2024-07-15 09:45:51.021128] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.363 NULL1 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1846589 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
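Pulled together, the connect_stress bring-up traced above reduces to the sequence below. All binaries, paths, and flags are exactly as logged; invoking rpc.py directly is a simplification, since in the harness the rpc_cmd wrapper handles the RPC socket:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Target app on cores 1-3 (-m 0xE), inside the target namespace.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

  $rpc nvmf_create_transport -t tcp -o -u 8192                   # TCP transport
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512                           # 1000 MiB null bdev, 512 B blocks

  # Stress client on core 0 (-c 0x1), hammering the listener for 10 s (-t 10).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!

The repeated kill -0 $PERF_PID / rpc_cmd pairs that follow read like a liveness poll: while the stress client is still alive, the script keeps replaying the batched RPC file assembled by the seq 1 20 / cat loop above.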
00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:34.363 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1846589 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.363 09:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.623 09:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.623 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1846589 00:12:34.623 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:34.623 09:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.623 09:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.193 09:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.193 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1846589 00:12:35.193 09:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.193 09:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.193 09:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.453 
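When the 10-second window expires, connect_stress exits on its own, so the next probe fails with "No such process" and the harness falls through to teardown, which is exactly what the log below records: wait, rpc.txt cleanup, nvmftestfini, kernel module unload, and killprocess against the target pid 1846442. A condensed sketch of that kill/wait pattern (helper bodies paraphrased from the xtrace below, not copied from autotest_common.sh):

# Paraphrase of the killprocess flow visible in the teardown below.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                        # autotest_common.sh@948
    kill -0 "$pid" || return 0                       # @952: already gone, nothing to do
    [ "$(uname)" = Linux ] || return 0               # @953: comm lookup is Linux-only
    process_name=$(ps --no-headers -o comm= "$pid")  # @954: reactor_1 in this run
    [ "$process_name" != sudo ] || return 1          # @958: never kill a sudo wrapper
    echo "killing process with pid $pid"             # @966
    kill "$pid"                                      # @967
    wait "$pid"                                      # @972: reap it so ports/sockets free up
}

# nvmfcleanup then unloads the kernel initiator stack, as the rmmod lines show:
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics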
00:12:44.278 09:46:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:44.278 09:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.278 09:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:44.536 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1846589
00:12:44.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1846589) - No such process
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1846589
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:44.796 rmmod nvme_tcp
00:12:44.796 rmmod nvme_fabrics
00:12:44.796 rmmod nvme_keyring
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1846442 ']'
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1846442
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1846442 ']'
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1846442
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1846442
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1846442'
00:12:44.796 killing process with pid 1846442
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1846442
00:12:44.796 09:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1846442
00:12:45.055 09:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:45.055 09:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:45.055 09:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:45.055 09:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:45.055 09:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:45.055 09:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:45.055 09:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:45.055 09:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:46.953 09:46:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:46.953
00:12:46.953 real 0m15.236s
00:12:46.953 user 0m37.991s
00:12:46.953 sys 0m6.035s
00:12:46.953 09:46:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:46.953 09:46:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:46.953 ************************************
00:12:46.953 END TEST nvmf_connect_stress
00:12:46.953 ************************************
00:12:47.211 09:46:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:12:47.211 09:46:03 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:12:47.211 09:46:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:12:47.211 09:46:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:47.211 09:46:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:12:47.211 ************************************
00:12:47.211 START TEST nvmf_fused_ordering
00:12:47.211 ************************************
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:12:47.212 * Looking for test storage...
00:12:47.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[... paths/export.sh@3-@6 re-assign, export, and echo the same PATH with the go/golangci/protoc toolchain directories prepended once more; three further near-identical full-PATH dumps omitted ...]
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14>
/dev/null' 00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:12:47.212 09:46:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:49.116 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:49.117 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:49.117 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:49.117 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:49.117 09:46:05 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:49.117 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:49.117 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:49.377 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:49.378 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:49.378 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:49.378 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:49.378 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:49.378 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:49.378 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:49.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:49.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:12:49.378 00:12:49.378 --- 10.0.0.2 ping statistics --- 00:12:49.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.378 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:12:49.378 09:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:49.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:49.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:12:49.378 00:12:49.378 --- 10.0.0.1 ping statistics --- 00:12:49.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.378 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1849735 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1849735 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1849735 ']' 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:49.378 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:49.378 [2024-07-15 09:46:06.075360] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
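Before the target application's startup output continues below, it is worth pausing on the plumbing nvmf_tcp_init just performed above: the first e810 port (cvl_0_0) is moved into a private network namespace to play the target, while its peer (cvl_0_1) stays in the root namespace as the initiator, and the two pings prove the path in both directions. A condensed replay, using only commands that appear in the trace (iproute2 and iptables assumed present on the rig):

# Namespace split driven by nvmf/common.sh@242-268 in the trace above.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0                     # start both ports from a clean slate
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port disappears into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                           # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1       # target ns -> root ns

On this phy rig the split presumably forces the 10.0.0.1 / 10.0.0.2 traffic across the cabled ports rather than the kernel's local shortcut, which is what makes the TCP transport exercise meaningful.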
00:12:49.378 [2024-07-15 09:46:06.075439] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.378 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.378 [2024-07-15 09:46:06.114427] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:49.378 [2024-07-15 09:46:06.140947] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.638 [2024-07-15 09:46:06.232304] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.638 [2024-07-15 09:46:06.232357] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.638 [2024-07-15 09:46:06.232371] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.638 [2024-07-15 09:46:06.232382] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.638 [2024-07-15 09:46:06.232391] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:49.638 [2024-07-15 09:46:06.232415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:49.638 [2024-07-15 09:46:06.376665] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:49.638 [2024-07-15 09:46:06.392889] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:49.638 NULL1 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.638 09:46:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:49.896 [2024-07-15 09:46:06.439856] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:12:49.897 [2024-07-15 09:46:06.439917] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1849759 ] 00:12:49.897 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.897 [2024-07-15 09:46:06.472394] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
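At this point the target-side object graph is complete: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and the 1000 MiB / 512-byte-block null bdev NULL1 attached as namespace 1. Each rpc_cmd above maps one-to-one onto scripts/rpc.py, so the setup can be replayed by hand against the same target; the subcommands and flags below are copied from the trace, and the socket path matches the rpc_addr shown in the waitforlisten lines above:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192                       # fused_ordering.sh@15
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                                # @16: allow any host
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420                                    # @17
$RPC bdev_null_create NULL1 1000 512                               # @18: 1000 MiB, 512 B blocks
$RPC bdev_wait_for_examine                                         # @19
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1        # @20

# The exerciser then connects from the root namespace as the initiator:
"$SPDK/test/nvme/fused_ordering/fused_ordering" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'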
00:12:50.464 Attached to nqn.2016-06.io.spdk:cnode1
00:12:50.464 Namespace ID: 1 size: 1GB
00:12:50.464 fused_ordering(0)
00:12:50.464 fused_ordering(1)
00:12:50.464 fused_ordering(2)
[... fused_ordering(3) through fused_ordering(536) continue, one counter line per fused command pair, with the capture timestamp advancing from 00:12:50.464 through 00:12:50.723 (around pair 206) and 00:12:51.288 (around pair 410); the intermediate counter lines are omitted ...]
00:12:51.288 fused_ordering(537)
00:12:51.288 fused_ordering(538) 00:12:51.288 fused_ordering(539) 00:12:51.288 fused_ordering(540) 00:12:51.288 fused_ordering(541) 00:12:51.288 fused_ordering(542) 00:12:51.288 fused_ordering(543) 00:12:51.288 fused_ordering(544) 00:12:51.288 fused_ordering(545) 00:12:51.288 fused_ordering(546) 00:12:51.288 fused_ordering(547) 00:12:51.288 fused_ordering(548) 00:12:51.288 fused_ordering(549) 00:12:51.288 fused_ordering(550) 00:12:51.288 fused_ordering(551) 00:12:51.288 fused_ordering(552) 00:12:51.288 fused_ordering(553) 00:12:51.288 fused_ordering(554) 00:12:51.288 fused_ordering(555) 00:12:51.288 fused_ordering(556) 00:12:51.288 fused_ordering(557) 00:12:51.288 fused_ordering(558) 00:12:51.288 fused_ordering(559) 00:12:51.288 fused_ordering(560) 00:12:51.288 fused_ordering(561) 00:12:51.288 fused_ordering(562) 00:12:51.288 fused_ordering(563) 00:12:51.288 fused_ordering(564) 00:12:51.288 fused_ordering(565) 00:12:51.288 fused_ordering(566) 00:12:51.288 fused_ordering(567) 00:12:51.288 fused_ordering(568) 00:12:51.288 fused_ordering(569) 00:12:51.288 fused_ordering(570) 00:12:51.288 fused_ordering(571) 00:12:51.289 fused_ordering(572) 00:12:51.289 fused_ordering(573) 00:12:51.289 fused_ordering(574) 00:12:51.289 fused_ordering(575) 00:12:51.289 fused_ordering(576) 00:12:51.289 fused_ordering(577) 00:12:51.289 fused_ordering(578) 00:12:51.289 fused_ordering(579) 00:12:51.289 fused_ordering(580) 00:12:51.289 fused_ordering(581) 00:12:51.289 fused_ordering(582) 00:12:51.289 fused_ordering(583) 00:12:51.289 fused_ordering(584) 00:12:51.289 fused_ordering(585) 00:12:51.289 fused_ordering(586) 00:12:51.289 fused_ordering(587) 00:12:51.289 fused_ordering(588) 00:12:51.289 fused_ordering(589) 00:12:51.289 fused_ordering(590) 00:12:51.289 fused_ordering(591) 00:12:51.289 fused_ordering(592) 00:12:51.289 fused_ordering(593) 00:12:51.289 fused_ordering(594) 00:12:51.289 fused_ordering(595) 00:12:51.289 fused_ordering(596) 00:12:51.289 fused_ordering(597) 00:12:51.289 fused_ordering(598) 00:12:51.289 fused_ordering(599) 00:12:51.289 fused_ordering(600) 00:12:51.289 fused_ordering(601) 00:12:51.289 fused_ordering(602) 00:12:51.289 fused_ordering(603) 00:12:51.289 fused_ordering(604) 00:12:51.289 fused_ordering(605) 00:12:51.289 fused_ordering(606) 00:12:51.289 fused_ordering(607) 00:12:51.289 fused_ordering(608) 00:12:51.289 fused_ordering(609) 00:12:51.289 fused_ordering(610) 00:12:51.289 fused_ordering(611) 00:12:51.289 fused_ordering(612) 00:12:51.289 fused_ordering(613) 00:12:51.289 fused_ordering(614) 00:12:51.289 fused_ordering(615) 00:12:52.226 fused_ordering(616) 00:12:52.226 fused_ordering(617) 00:12:52.226 fused_ordering(618) 00:12:52.226 fused_ordering(619) 00:12:52.226 fused_ordering(620) 00:12:52.226 fused_ordering(621) 00:12:52.226 fused_ordering(622) 00:12:52.226 fused_ordering(623) 00:12:52.226 fused_ordering(624) 00:12:52.226 fused_ordering(625) 00:12:52.226 fused_ordering(626) 00:12:52.226 fused_ordering(627) 00:12:52.226 fused_ordering(628) 00:12:52.226 fused_ordering(629) 00:12:52.226 fused_ordering(630) 00:12:52.226 fused_ordering(631) 00:12:52.226 fused_ordering(632) 00:12:52.226 fused_ordering(633) 00:12:52.226 fused_ordering(634) 00:12:52.226 fused_ordering(635) 00:12:52.226 fused_ordering(636) 00:12:52.226 fused_ordering(637) 00:12:52.226 fused_ordering(638) 00:12:52.226 fused_ordering(639) 00:12:52.226 fused_ordering(640) 00:12:52.226 fused_ordering(641) 00:12:52.226 fused_ordering(642) 00:12:52.226 fused_ordering(643) 00:12:52.226 fused_ordering(644) 00:12:52.226 
fused_ordering(645) 00:12:52.226 fused_ordering(646) 00:12:52.226 fused_ordering(647) 00:12:52.226 fused_ordering(648) 00:12:52.226 fused_ordering(649) 00:12:52.226 fused_ordering(650) 00:12:52.226 fused_ordering(651) 00:12:52.226 fused_ordering(652) 00:12:52.226 fused_ordering(653) 00:12:52.226 fused_ordering(654) 00:12:52.226 fused_ordering(655) 00:12:52.226 fused_ordering(656) 00:12:52.226 fused_ordering(657) 00:12:52.226 fused_ordering(658) 00:12:52.226 fused_ordering(659) 00:12:52.226 fused_ordering(660) 00:12:52.226 fused_ordering(661) 00:12:52.226 fused_ordering(662) 00:12:52.226 fused_ordering(663) 00:12:52.226 fused_ordering(664) 00:12:52.226 fused_ordering(665) 00:12:52.226 fused_ordering(666) 00:12:52.226 fused_ordering(667) 00:12:52.226 fused_ordering(668) 00:12:52.226 fused_ordering(669) 00:12:52.226 fused_ordering(670) 00:12:52.226 fused_ordering(671) 00:12:52.226 fused_ordering(672) 00:12:52.226 fused_ordering(673) 00:12:52.226 fused_ordering(674) 00:12:52.226 fused_ordering(675) 00:12:52.226 fused_ordering(676) 00:12:52.226 fused_ordering(677) 00:12:52.226 fused_ordering(678) 00:12:52.226 fused_ordering(679) 00:12:52.226 fused_ordering(680) 00:12:52.226 fused_ordering(681) 00:12:52.226 fused_ordering(682) 00:12:52.226 fused_ordering(683) 00:12:52.226 fused_ordering(684) 00:12:52.226 fused_ordering(685) 00:12:52.226 fused_ordering(686) 00:12:52.226 fused_ordering(687) 00:12:52.226 fused_ordering(688) 00:12:52.226 fused_ordering(689) 00:12:52.226 fused_ordering(690) 00:12:52.226 fused_ordering(691) 00:12:52.226 fused_ordering(692) 00:12:52.226 fused_ordering(693) 00:12:52.226 fused_ordering(694) 00:12:52.226 fused_ordering(695) 00:12:52.226 fused_ordering(696) 00:12:52.226 fused_ordering(697) 00:12:52.226 fused_ordering(698) 00:12:52.226 fused_ordering(699) 00:12:52.226 fused_ordering(700) 00:12:52.226 fused_ordering(701) 00:12:52.226 fused_ordering(702) 00:12:52.226 fused_ordering(703) 00:12:52.226 fused_ordering(704) 00:12:52.226 fused_ordering(705) 00:12:52.226 fused_ordering(706) 00:12:52.226 fused_ordering(707) 00:12:52.226 fused_ordering(708) 00:12:52.226 fused_ordering(709) 00:12:52.226 fused_ordering(710) 00:12:52.226 fused_ordering(711) 00:12:52.226 fused_ordering(712) 00:12:52.226 fused_ordering(713) 00:12:52.226 fused_ordering(714) 00:12:52.226 fused_ordering(715) 00:12:52.226 fused_ordering(716) 00:12:52.226 fused_ordering(717) 00:12:52.226 fused_ordering(718) 00:12:52.226 fused_ordering(719) 00:12:52.226 fused_ordering(720) 00:12:52.226 fused_ordering(721) 00:12:52.226 fused_ordering(722) 00:12:52.226 fused_ordering(723) 00:12:52.226 fused_ordering(724) 00:12:52.226 fused_ordering(725) 00:12:52.226 fused_ordering(726) 00:12:52.226 fused_ordering(727) 00:12:52.226 fused_ordering(728) 00:12:52.226 fused_ordering(729) 00:12:52.226 fused_ordering(730) 00:12:52.226 fused_ordering(731) 00:12:52.226 fused_ordering(732) 00:12:52.226 fused_ordering(733) 00:12:52.227 fused_ordering(734) 00:12:52.227 fused_ordering(735) 00:12:52.227 fused_ordering(736) 00:12:52.227 fused_ordering(737) 00:12:52.227 fused_ordering(738) 00:12:52.227 fused_ordering(739) 00:12:52.227 fused_ordering(740) 00:12:52.227 fused_ordering(741) 00:12:52.227 fused_ordering(742) 00:12:52.227 fused_ordering(743) 00:12:52.227 fused_ordering(744) 00:12:52.227 fused_ordering(745) 00:12:52.227 fused_ordering(746) 00:12:52.227 fused_ordering(747) 00:12:52.227 fused_ordering(748) 00:12:52.227 fused_ordering(749) 00:12:52.227 fused_ordering(750) 00:12:52.227 fused_ordering(751) 00:12:52.227 fused_ordering(752) 
00:12:52.227 fused_ordering(753) 00:12:52.227 fused_ordering(754) 00:12:52.227 fused_ordering(755) 00:12:52.227 fused_ordering(756) 00:12:52.227 fused_ordering(757) 00:12:52.227 fused_ordering(758) 00:12:52.227 fused_ordering(759) 00:12:52.227 fused_ordering(760) 00:12:52.227 fused_ordering(761) 00:12:52.227 fused_ordering(762) 00:12:52.227 fused_ordering(763) 00:12:52.227 fused_ordering(764) 00:12:52.227 fused_ordering(765) 00:12:52.227 fused_ordering(766) 00:12:52.227 fused_ordering(767) 00:12:52.227 fused_ordering(768) 00:12:52.227 fused_ordering(769) 00:12:52.227 fused_ordering(770) 00:12:52.227 fused_ordering(771) 00:12:52.227 fused_ordering(772) 00:12:52.227 fused_ordering(773) 00:12:52.227 fused_ordering(774) 00:12:52.227 fused_ordering(775) 00:12:52.227 fused_ordering(776) 00:12:52.227 fused_ordering(777) 00:12:52.227 fused_ordering(778) 00:12:52.227 fused_ordering(779) 00:12:52.227 fused_ordering(780) 00:12:52.227 fused_ordering(781) 00:12:52.227 fused_ordering(782) 00:12:52.227 fused_ordering(783) 00:12:52.227 fused_ordering(784) 00:12:52.227 fused_ordering(785) 00:12:52.227 fused_ordering(786) 00:12:52.227 fused_ordering(787) 00:12:52.227 fused_ordering(788) 00:12:52.227 fused_ordering(789) 00:12:52.227 fused_ordering(790) 00:12:52.227 fused_ordering(791) 00:12:52.227 fused_ordering(792) 00:12:52.227 fused_ordering(793) 00:12:52.227 fused_ordering(794) 00:12:52.227 fused_ordering(795) 00:12:52.227 fused_ordering(796) 00:12:52.227 fused_ordering(797) 00:12:52.227 fused_ordering(798) 00:12:52.227 fused_ordering(799) 00:12:52.227 fused_ordering(800) 00:12:52.227 fused_ordering(801) 00:12:52.227 fused_ordering(802) 00:12:52.227 fused_ordering(803) 00:12:52.227 fused_ordering(804) 00:12:52.227 fused_ordering(805) 00:12:52.227 fused_ordering(806) 00:12:52.227 fused_ordering(807) 00:12:52.227 fused_ordering(808) 00:12:52.227 fused_ordering(809) 00:12:52.227 fused_ordering(810) 00:12:52.227 fused_ordering(811) 00:12:52.227 fused_ordering(812) 00:12:52.227 fused_ordering(813) 00:12:52.227 fused_ordering(814) 00:12:52.227 fused_ordering(815) 00:12:52.227 fused_ordering(816) 00:12:52.227 fused_ordering(817) 00:12:52.227 fused_ordering(818) 00:12:52.227 fused_ordering(819) 00:12:52.227 fused_ordering(820) 00:12:52.794 fused_ordering(821) 00:12:52.794 fused_ordering(822) 00:12:52.794 fused_ordering(823) 00:12:52.794 fused_ordering(824) 00:12:52.794 fused_ordering(825) 00:12:52.794 fused_ordering(826) 00:12:52.794 fused_ordering(827) 00:12:52.794 fused_ordering(828) 00:12:52.794 fused_ordering(829) 00:12:52.794 fused_ordering(830) 00:12:52.794 fused_ordering(831) 00:12:52.794 fused_ordering(832) 00:12:52.794 fused_ordering(833) 00:12:52.794 fused_ordering(834) 00:12:52.794 fused_ordering(835) 00:12:52.794 fused_ordering(836) 00:12:52.794 fused_ordering(837) 00:12:52.794 fused_ordering(838) 00:12:52.794 fused_ordering(839) 00:12:52.794 fused_ordering(840) 00:12:52.794 fused_ordering(841) 00:12:52.794 fused_ordering(842) 00:12:52.794 fused_ordering(843) 00:12:52.794 fused_ordering(844) 00:12:52.794 fused_ordering(845) 00:12:52.794 fused_ordering(846) 00:12:52.794 fused_ordering(847) 00:12:52.794 fused_ordering(848) 00:12:52.794 fused_ordering(849) 00:12:52.794 fused_ordering(850) 00:12:52.794 fused_ordering(851) 00:12:52.794 fused_ordering(852) 00:12:52.794 fused_ordering(853) 00:12:52.794 fused_ordering(854) 00:12:52.794 fused_ordering(855) 00:12:52.794 fused_ordering(856) 00:12:52.794 fused_ordering(857) 00:12:52.794 fused_ordering(858) 00:12:52.794 fused_ordering(859) 00:12:52.794 
fused_ordering(860) 00:12:52.794 fused_ordering(861) 00:12:52.794 fused_ordering(862) 00:12:52.794 fused_ordering(863) 00:12:52.794 fused_ordering(864) 00:12:52.794 fused_ordering(865) 00:12:52.794 fused_ordering(866) 00:12:52.794 fused_ordering(867) 00:12:52.794 fused_ordering(868) 00:12:52.794 fused_ordering(869) 00:12:52.794 fused_ordering(870) 00:12:52.794 fused_ordering(871) 00:12:52.794 fused_ordering(872) 00:12:52.794 fused_ordering(873) 00:12:52.794 fused_ordering(874) 00:12:52.794 fused_ordering(875) 00:12:52.794 fused_ordering(876) 00:12:52.794 fused_ordering(877) 00:12:52.794 fused_ordering(878) 00:12:52.794 fused_ordering(879) 00:12:52.794 fused_ordering(880) 00:12:52.794 fused_ordering(881) 00:12:52.794 fused_ordering(882) 00:12:52.794 fused_ordering(883) 00:12:52.794 fused_ordering(884) 00:12:52.794 fused_ordering(885) 00:12:52.794 fused_ordering(886) 00:12:52.794 fused_ordering(887) 00:12:52.794 fused_ordering(888) 00:12:52.794 fused_ordering(889) 00:12:52.794 fused_ordering(890) 00:12:52.794 fused_ordering(891) 00:12:52.794 fused_ordering(892) 00:12:52.794 fused_ordering(893) 00:12:52.794 fused_ordering(894) 00:12:52.794 fused_ordering(895) 00:12:52.794 fused_ordering(896) 00:12:52.794 fused_ordering(897) 00:12:52.794 fused_ordering(898) 00:12:52.794 fused_ordering(899) 00:12:52.794 fused_ordering(900) 00:12:52.794 fused_ordering(901) 00:12:52.794 fused_ordering(902) 00:12:52.794 fused_ordering(903) 00:12:52.794 fused_ordering(904) 00:12:52.794 fused_ordering(905) 00:12:52.794 fused_ordering(906) 00:12:52.794 fused_ordering(907) 00:12:52.794 fused_ordering(908) 00:12:52.794 fused_ordering(909) 00:12:52.794 fused_ordering(910) 00:12:52.794 fused_ordering(911) 00:12:52.794 fused_ordering(912) 00:12:52.794 fused_ordering(913) 00:12:52.794 fused_ordering(914) 00:12:52.794 fused_ordering(915) 00:12:52.794 fused_ordering(916) 00:12:52.794 fused_ordering(917) 00:12:52.794 fused_ordering(918) 00:12:52.794 fused_ordering(919) 00:12:52.794 fused_ordering(920) 00:12:52.794 fused_ordering(921) 00:12:52.794 fused_ordering(922) 00:12:52.794 fused_ordering(923) 00:12:52.794 fused_ordering(924) 00:12:52.794 fused_ordering(925) 00:12:52.794 fused_ordering(926) 00:12:52.794 fused_ordering(927) 00:12:52.794 fused_ordering(928) 00:12:52.794 fused_ordering(929) 00:12:52.794 fused_ordering(930) 00:12:52.794 fused_ordering(931) 00:12:52.794 fused_ordering(932) 00:12:52.794 fused_ordering(933) 00:12:52.794 fused_ordering(934) 00:12:52.794 fused_ordering(935) 00:12:52.794 fused_ordering(936) 00:12:52.794 fused_ordering(937) 00:12:52.794 fused_ordering(938) 00:12:52.794 fused_ordering(939) 00:12:52.794 fused_ordering(940) 00:12:52.794 fused_ordering(941) 00:12:52.794 fused_ordering(942) 00:12:52.794 fused_ordering(943) 00:12:52.794 fused_ordering(944) 00:12:52.794 fused_ordering(945) 00:12:52.794 fused_ordering(946) 00:12:52.794 fused_ordering(947) 00:12:52.794 fused_ordering(948) 00:12:52.794 fused_ordering(949) 00:12:52.794 fused_ordering(950) 00:12:52.794 fused_ordering(951) 00:12:52.794 fused_ordering(952) 00:12:52.794 fused_ordering(953) 00:12:52.794 fused_ordering(954) 00:12:52.794 fused_ordering(955) 00:12:52.794 fused_ordering(956) 00:12:52.794 fused_ordering(957) 00:12:52.794 fused_ordering(958) 00:12:52.794 fused_ordering(959) 00:12:52.794 fused_ordering(960) 00:12:52.794 fused_ordering(961) 00:12:52.794 fused_ordering(962) 00:12:52.794 fused_ordering(963) 00:12:52.794 fused_ordering(964) 00:12:52.794 fused_ordering(965) 00:12:52.794 fused_ordering(966) 00:12:52.794 fused_ordering(967) 
00:12:52.794 fused_ordering(968) 00:12:52.794 fused_ordering(969) 00:12:52.794 fused_ordering(970) 00:12:52.794 fused_ordering(971) 00:12:52.794 fused_ordering(972) 00:12:52.794 fused_ordering(973) 00:12:52.794 fused_ordering(974) 00:12:52.794 fused_ordering(975) 00:12:52.794 fused_ordering(976) 00:12:52.794 fused_ordering(977) 00:12:52.794 fused_ordering(978) 00:12:52.794 fused_ordering(979) 00:12:52.794 fused_ordering(980) 00:12:52.794 fused_ordering(981) 00:12:52.794 fused_ordering(982) 00:12:52.794 fused_ordering(983) 00:12:52.794 fused_ordering(984) 00:12:52.794 fused_ordering(985) 00:12:52.794 fused_ordering(986) 00:12:52.794 fused_ordering(987) 00:12:52.794 fused_ordering(988) 00:12:52.794 fused_ordering(989) 00:12:52.794 fused_ordering(990) 00:12:52.794 fused_ordering(991) 00:12:52.795 fused_ordering(992) 00:12:52.795 fused_ordering(993) 00:12:52.795 fused_ordering(994) 00:12:52.795 fused_ordering(995) 00:12:52.795 fused_ordering(996) 00:12:52.795 fused_ordering(997) 00:12:52.795 fused_ordering(998) 00:12:52.795 fused_ordering(999) 00:12:52.795 fused_ordering(1000) 00:12:52.795 fused_ordering(1001) 00:12:52.795 fused_ordering(1002) 00:12:52.795 fused_ordering(1003) 00:12:52.795 fused_ordering(1004) 00:12:52.795 fused_ordering(1005) 00:12:52.795 fused_ordering(1006) 00:12:52.795 fused_ordering(1007) 00:12:52.795 fused_ordering(1008) 00:12:52.795 fused_ordering(1009) 00:12:52.795 fused_ordering(1010) 00:12:52.795 fused_ordering(1011) 00:12:52.795 fused_ordering(1012) 00:12:52.795 fused_ordering(1013) 00:12:52.795 fused_ordering(1014) 00:12:52.795 fused_ordering(1015) 00:12:52.795 fused_ordering(1016) 00:12:52.795 fused_ordering(1017) 00:12:52.795 fused_ordering(1018) 00:12:52.795 fused_ordering(1019) 00:12:52.795 fused_ordering(1020) 00:12:52.795 fused_ordering(1021) 00:12:52.795 fused_ordering(1022) 00:12:52.795 fused_ordering(1023) 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.795 rmmod nvme_tcp 00:12:52.795 rmmod nvme_fabrics 00:12:52.795 rmmod nvme_keyring 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1849735 ']' 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1849735 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1849735 ']' 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1849735 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:12:52.795 09:46:09 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1849735 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1849735' 00:12:52.795 killing process with pid 1849735 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1849735 00:12:52.795 09:46:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1849735 00:12:53.053 09:46:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:53.053 09:46:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:53.053 09:46:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:53.053 09:46:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:53.053 09:46:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:53.053 09:46:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.053 09:46:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.053 09:46:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.591 09:46:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:55.591 00:12:55.591 real 0m8.005s 00:12:55.591 user 0m5.669s 00:12:55.591 sys 0m3.596s 00:12:55.591 09:46:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:55.591 09:46:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:55.591 ************************************ 00:12:55.591 END TEST nvmf_fused_ordering 00:12:55.591 ************************************ 00:12:55.591 09:46:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:55.591 09:46:11 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:55.591 09:46:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:55.591 09:46:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.591 09:46:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:55.591 ************************************ 00:12:55.591 START TEST nvmf_delete_subsystem 00:12:55.591 ************************************ 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:55.591 * Looking for test storage... 
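The nvmf_delete_subsystem test that begins here follows the same skeleton as every nvmf target test in this run. A minimal sketch of that skeleton, using only the helper names that appear in this trace (their real definitions live in test/nvmf/common.sh and the test script itself):

  source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
  nvmftestinit          # pick NICs, build the cvl_0_0_ns_spdk namespace, modprobe nvme-tcp
  nvmfappstart -m 0x3   # launch nvmf_tgt inside the namespace, waitforlisten on /var/tmp/spdk.sock
  trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
  # test body: rpc_cmd configuration calls, I/O, assertions
  nvmftestfini          # kill nvmf_tgt, modprobe -r nvme-tcp, tear down the namespace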
00:12:55.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:55.591 09:46:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:57.497 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:57.497 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:57.497 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:57.497 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.497 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:57.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:12:57.498 00:12:57.498 --- 10.0.0.2 ping statistics --- 00:12:57.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.498 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:57.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:12:57.498 00:12:57.498 --- 10.0.0.1 ping statistics --- 00:12:57.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.498 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1852080 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1852080 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1852080 ']' 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
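Before the target app comes up, nvmf_tcp_init has just wired the two ice ports into a point-to-point pair: the target port cvl_0_0 is moved into its own network namespace while the initiator port cvl_0_1 stays in the host namespace. Condensed from the commands traced above (same device names and addresses):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target NIC leaves the host namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                  # verify reachability both ways
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1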
00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:57.498 09:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:57.498 [2024-07-15 09:46:14.030678] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:12:57.498 [2024-07-15 09:46:14.030780] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.498 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.498 [2024-07-15 09:46:14.068596] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:57.498 [2024-07-15 09:46:14.100624] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:57.498 [2024-07-15 09:46:14.190123] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.498 [2024-07-15 09:46:14.190187] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.498 [2024-07-15 09:46:14.190213] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.498 [2024-07-15 09:46:14.190227] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.498 [2024-07-15 09:46:14.190239] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.498 [2024-07-15 09:46:14.190341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.498 [2024-07-15 09:46:14.190348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.758 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.758 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:12:57.758 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:57.758 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:57.758 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:57.758 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.758 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:57.758 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.758 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:57.758 [2024-07-15 09:46:14.344402] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.758 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.758 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:57.758 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.758 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:57.758 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:12:57.759 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.759 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.759 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:57.759 [2024-07-15 09:46:14.360623] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.759 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.759 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:57.759 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.759 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:57.759 NULL1 00:12:57.759 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.759 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:57.759 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.759 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:57.759 Delay0 00:12:57.759 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.759 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.759 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.759 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:57.759 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.759 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1852101 00:12:57.759 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:57.759 09:46:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:57.759 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.759 [2024-07-15 09:46:14.435314] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
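The rpc_cmd calls above build the target under test; the same configuration can be reproduced by hand against a running nvmf_tgt with scripts/rpc.py (a sketch mirroring the traced arguments, assuming the default /var/tmp/spdk.sock RPC socket). Delay0 wraps the null bdev with one-second (1000000 us) average and p99 latencies, so plenty of I/O is still outstanding when the subsystem is later deleted:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512            # 1000 MB null bdev, 512-byte blocks
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &             # 5 s run, queue depth 128, 70/30 read/write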
00:12:59.708 09:46:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.708 09:46:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.708 09:46:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:59.967 Read completed with error (sct=0, sc=8)
00:12:59.967 Write completed with error (sct=0, sc=8)
00:12:59.967 starting I/O failed: -6
00:12:59.967 [... dozens of further Read/Write "completed with error (sct=0, sc=8)" completions at 00:12:59.967-00:12:59.968, interleaved with "starting I/O failed: -6", elided ...]
00:12:59.968 [2024-07-15 09:46:16.646454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb8c000d2f0 is same with the state(5) to be set
00:12:59.968 [... dozens of further failed Read/Write completions elided ...]
00:13:00.905 [2024-07-15 09:46:17.611846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5bb40 is same with the state(5) to be set
00:13:00.905 [... 10 failed Read/Write completions elided ...]
00:13:00.905 [2024-07-15 09:46:17.645097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb8c000d600 is same with the state(5) to be set
00:13:00.905 [... 19 failed Read/Write completions elided ...]
00:13:00.905 [2024-07-15 09:46:17.645325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb8c000cfe0 is same with the state(5) to be set
00:13:00.905 [... dozens of further failed Read/Write completions elided ...]
00:13:00.906 [2024-07-15 09:46:17.650303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3e100 is same with the state(5) to be set
00:13:00.906 [... dozens of further failed Read/Write completions elided ...]
00:13:00.906 [2024-07-15 09:46:17.650577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3dd40 is same with the state(5) to be set
00:13:00.906 Initializing NVMe Controllers
00:13:00.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:00.906 Controller IO queue size 128, less than required.
00:13:00.906 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:00.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:13:00.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:13:00.906 Initialization complete. Launching workers.
00:13:00.906 ========================================================
00:13:00.906                                                                                Latency(us)
00:13:00.906 Device Information                                                       :    IOPS   MiB/s    Average       min        max
00:13:00.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  187.33    0.09  901769.11    499.21 1044331.94
00:13:00.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  143.60    0.07  994932.56    375.99 1999667.45
00:13:00.906 ========================================================
00:13:00.906 Total                                                                    :  330.94    0.16  942195.89    375.99 1999667.45
00:13:00.906
00:13:00.906 [2024-07-15 09:46:17.651374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5bb40 (9): Bad file descriptor
00:13:00.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:13:00.906 09:46:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.906 09:46:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:13:00.906 09:46:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1852101 00:13:00.906 09:46:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:13:01.474 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:01.474 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1852101
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1852101) - No such process
00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1852101 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait
1852101 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1852101 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:01.475 [2024-07-15 09:46:18.170146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1852627 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852627 00:13:01.475 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:01.475 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.475 [2024-07-15 09:46:18.227434] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:13:02.042 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:02.042 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852627 00:13:02.042 09:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:02.610 09:46:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:02.610 09:46:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852627 00:13:02.610 09:46:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:03.177 09:46:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:03.177 09:46:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852627 00:13:03.177 09:46:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:03.437 09:46:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:03.437 09:46:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852627 00:13:03.437 09:46:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:04.005 09:46:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:04.005 09:46:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852627 00:13:04.005 09:46:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:04.572 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:04.572 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852627 00:13:04.572 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:04.831 Initializing NVMe Controllers 00:13:04.831 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:04.831 Controller IO queue size 128, less than required. 00:13:04.831 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:04.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:04.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:04.831 Initialization complete. Launching workers. 
00:13:04.831 ========================================================
00:13:04.831                                                                                Latency(us)
00:13:04.831 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:13:04.831 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06 1003821.09 1000204.34 1042174.03
00:13:04.831 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06 1004801.74 1000236.86 1013174.13
00:13:04.831 ========================================================
00:13:04.831 Total                                                                    :  256.00    0.12 1004311.41 1000204.34 1042174.03
00:13:04.831
00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852627 00:13:05.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1852627) - No such process 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1852627 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:05.090 rmmod nvme_tcp 00:13:05.090 rmmod nvme_fabrics 00:13:05.090 rmmod nvme_keyring 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1852080 ']' 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1852080 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1852080 ']' 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1852080 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:05.090 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1852080 00:13:05.091 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:05.091 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:05.091 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1852080' killing process with pid 1852080 00:13:05.091 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1852080 00:13:05.091 09:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait
1852080 00:13:05.349 09:46:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:05.349 09:46:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:05.349 09:46:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:05.349 09:46:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:05.349 09:46:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:05.349 09:46:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.349 09:46:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.349 09:46:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.885 09:46:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:07.885 00:13:07.885 real 0m12.236s 00:13:07.885 user 0m27.889s 00:13:07.885 sys 0m2.908s 00:13:07.885 09:46:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:07.885 09:46:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:07.885 ************************************ 00:13:07.885 END TEST nvmf_delete_subsystem 00:13:07.885 ************************************ 00:13:07.885 09:46:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:07.885 09:46:24 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:07.885 09:46:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:07.885 09:46:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:07.885 09:46:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:07.885 ************************************ 00:13:07.885 START TEST nvmf_ns_masking 00:13:07.885 ************************************ 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:07.885 * Looking for test storage... 
00:13:07.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated several more times, elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... repeated toolchain directories elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... repeated toolchain directories elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... repeated toolchain directories elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=390e1407-6c50-4d9a-be1f-302a78988ce3 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=34b669ca-6f72-46fe-9b8a-e5cfe212f611 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- #
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=5f35612c-2316-4f8b-b506-d084a8e15195 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:13:07.885 09:46:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.788 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:09.789 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:09.789 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:09.789 
09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:09.789 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:09.789 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:13:09.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:09.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms
00:13:09.789
00:13:09.789 --- 10.0.0.2 ping statistics ---
00:13:09.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:09.789 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms
00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:09.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:09.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms
00:13:09.789
00:13:09.789 --- 10.0.0.1 ping statistics ---
00:13:09.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:09.789 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms
00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1854965 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1854965 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1854965 ']' 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:09.789 09:46:26
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:09.789 09:46:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:09.789 [2024-07-15 09:46:26.363971] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:13:09.789 [2024-07-15 09:46:26.364056] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.789 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.789 [2024-07-15 09:46:26.405758] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:09.789 [2024-07-15 09:46:26.436338] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.789 [2024-07-15 09:46:26.529015] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.789 [2024-07-15 09:46:26.529076] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.789 [2024-07-15 09:46:26.529089] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.789 [2024-07-15 09:46:26.529100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.789 [2024-07-15 09:46:26.529110] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
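Before the trace resumes: the namespace-masking flow recorded below reduces to roughly this sequence. Same caveats as the earlier sketch (rpc.py is shorthand for the full scripts/rpc.py path; transport, bdev, and connect flags are copied from the trace; the host UUID is the one generated by ns_masking.sh@19; comments are editorial).
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1      # 64 MB malloc bdevs, 512-byte blocks
rpc.py bdev_malloc_create 64 512 -b Malloc2
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I 5f35612c-2316-4f8b-b506-d084a8e15195 -a 10.0.0.2 -s 4420 -i 4
nvme list-ns /dev/nvme0    # namespace 0x1 is visible here; the masking case is
                           # exercised later by re-adding it with --no-auto-visible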
00:13:09.789 [2024-07-15 09:46:26.529150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.047 09:46:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:10.047 09:46:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:13:10.047 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:10.047 09:46:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:10.047 09:46:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:10.047 09:46:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.048 09:46:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:10.305 [2024-07-15 09:46:26.946385] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.305 09:46:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:10.305 09:46:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:10.305 09:46:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:10.564 Malloc1 00:13:10.564 09:46:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:10.822 Malloc2 00:13:10.822 09:46:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:11.389 09:46:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:11.389 09:46:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.646 [2024-07-15 09:46:28.390550] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.646 09:46:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:11.646 09:46:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5f35612c-2316-4f8b-b506-d084a8e15195 -a 10.0.0.2 -s 4420 -i 4 00:13:11.906 09:46:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.906 09:46:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:11.906 09:46:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.906 09:46:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:11.906 09:46:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:13.810 09:46:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:13.810 09:46:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:13.810 09:46:30 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:13.810 09:46:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:13.810 09:46:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.810 09:46:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:13.810 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:13.810 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:13.810 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:13.810 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:13.810 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:13.810 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:13.810 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:14.069 [ 0]:0x1 00:13:14.069 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:14.069 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:14.069 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=04dafa172b5f4065a59934c80bb21303 00:13:14.069 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 04dafa172b5f4065a59934c80bb21303 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:14.069 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:14.327 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:14.327 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:14.327 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:14.327 [ 0]:0x1 00:13:14.327 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:14.327 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:14.327 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=04dafa172b5f4065a59934c80bb21303 00:13:14.327 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 04dafa172b5f4065a59934c80bb21303 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:14.327 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:14.327 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:14.327 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:14.327 [ 1]:0x2 00:13:14.327 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:14.327 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:14.327 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5f1f5bd58816408d8dbfbe159d44e129 00:13:14.327 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5f1f5bd58816408d8dbfbe159d44e129 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:14.327 09:46:30 
00:13:14.327 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:13:14.327 09:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:14.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:14.327 09:46:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:14.622 09:46:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:13:14.881 09:46:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:13:14.881 09:46:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5f35612c-2316-4f8b-b506-d084a8e15195 -a 10.0.0.2 -s 4420 -i 4
00:13:15.138 09:46:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:13:15.138 09:46:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:13:15.138 09:46:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:13:15.138 09:46:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]]
00:13:15.138 09:46:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1
00:13:15.138 09:46:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
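Host side, connect and the serial wait are equally compact. A sketch of the same sequence (the host NQN/UUID are the values used throughout this run; waitforserial's argument is the expected device count):

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I 5f35612c-2316-4f8b-b506-d084a8e15195 -a 10.0.0.2 -s 4420 -i 4
  # Poll until lsblk shows the expected number of namespaces carrying the serial.
  i=0 nvme_device_counter=1
  while (( i++ <= 15 )); do
      sleep 2
      nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
      (( nvme_devices == nvme_device_counter )) && break
  done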
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:17.032 09:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:17.289 09:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:13:17.289 09:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:17.289 09:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:13:17.289 09:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:13:17.289 09:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:13:17.289 09:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:13:17.289 09:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:13:17.289 09:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:17.289 09:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:17.289 [ 0]:0x2
00:13:17.289 09:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:17.289 09:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:17.289 09:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5f1f5bd58816408d8dbfbe159d44e129
00:13:17.289 09:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5f1f5bd58816408d8dbfbe159d44e129 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:17.289 09:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:13:17.547 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:13:17.547 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:17.547 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:17.547 [ 0]:0x1
00:13:17.547 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:17.547 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:17.547 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=04dafa172b5f4065a59934c80bb21303
00:13:17.547 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 04dafa172b5f4065a59934c80bb21303 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:17.547 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:13:17.547 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:17.547 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:17.547 [ 1]:0x2
00:13:17.547 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:17.547 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:17.547 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5f1f5bd58816408d8dbfbe159d44e129
00:13:17.547 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5f1f5bd58816408d8dbfbe159d44e129 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:17.547 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:17.805 [ 0]:0x2
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5f1f5bd58816408d8dbfbe159d44e129
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5f1f5bd58816408d8dbfbe159d44e129 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:13:17.805 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:18.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
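Because NSID 1 was re-added with --no-auto-visible, its visibility is now driven purely by per-host attach RPCs; the flip between a real NGUID and the all-zero one above corresponds to this pair of calls (sketch, values as in this run):

  # Expose NSID 1 to host1: ns_is_visible 0x1 passes again on the host.
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # Mask it again: list-ns drops 0x1 and id-ns reports NGUID 000...0.
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1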
00:13:18.063 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:13:18.063 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:13:18.063 09:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5f35612c-2316-4f8b-b506-d084a8e15195 -a 10.0.0.2 -s 4420 -i 4
00:13:18.320 09:46:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:13:18.320 09:46:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:13:18.320 09:46:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:13:18.320 09:46:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]]
00:13:18.320 09:46:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2
00:13:18.320 09:46:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:20.847 [ 0]:0x1
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=04dafa172b5f4065a59934c80bb21303
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 04dafa172b5f4065a59934c80bb21303 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:20.847 [ 1]:0x2
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5f1f5bd58816408d8dbfbe159d44e129
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5f1f5bd58816408d8dbfbe159d44e129 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:20.847 [ 0]:0x2
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5f1f5bd58816408d8dbfbe159d44e129
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5f1f5bd58816408d8dbfbe159d44e129 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:13:20.847 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:13:21.105 [2024-07-15 09:46:37.771462] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:13:21.105 request:
00:13:21.105 {
00:13:21.105 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:13:21.105 "nsid": 2,
00:13:21.105 "host": "nqn.2016-06.io.spdk:host1",
00:13:21.105 "method": "nvmf_ns_remove_host",
00:13:21.105 "req_id": 1
00:13:21.105 }
00:13:21.105 Got JSON-RPC error response
00:13:21.105 response:
00:13:21.105 {
00:13:21.105 "code": -32602,
00:13:21.105 "message": "Invalid parameters"
00:13:21.105 }
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
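The -32602 response is the expected outcome here: NSID 2 was added without --no-auto-visible, so the per-host masking RPC is rejected, and the NOT wrapper turns that failure into a pass. A simplified sketch of the pattern from common/autotest_common.sh (the real helper also validates the argument with type -t/-P, as the trace shows):

  NOT() {
      local es=0
      "$@" || es=$?
      # NOT succeeds exactly when the wrapped command failed.
      (( es != 0 ))
  }
  NOT rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1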
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:21.105 [ 0]:0x2
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5f1f5bd58816408d8dbfbe159d44e129
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5f1f5bd58816408d8dbfbe159d44e129 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect
00:13:21.105 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:21.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:21.364 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1856462
00:13:21.364 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2
00:13:21.364 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
00:13:21.364 09:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1856462 /var/tmp/host.sock
00:13:21.364 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1856462 ']'
00:13:21.364 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock
00:13:21.364 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100
00:13:21.364 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:13:21.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:13:21.364 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable
00:13:21.364 09:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:13:21.364 [2024-07-15 09:46:37.977759] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:13:21.364 [2024-07-15 09:46:37.977855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856462 ]
00:13:21.364 EAL: No free 2048 kB hugepages reported on node 1
00:13:21.364 [2024-07-15 09:46:38.010732] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:21.364 [2024-07-15 09:46:38.042533] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:21.364 [2024-07-15 09:46:38.137326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:13:21.622 09:46:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:21.622 09:46:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0
00:13:21.622 09:46:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:21.880 09:46:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:22.137 09:46:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 390e1407-6c50-4d9a-be1f-302a78988ce3
00:13:22.138 09:46:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d -
00:13:22.138 09:46:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 390E14076C504D9ABE1F302A78988CE3 -i
00:13:22.395 09:46:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 34b669ca-6f72-46fe-9b8a-e5cfe212f611
00:13:22.395 09:46:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d -
00:13:22.395 09:46:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 34B669CA6F7246FE9B8AE5CFE212F611 -i
00:13:22.653 09:46:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:13:22.911 09:46:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
00:13:23.168 09:46:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:13:23.169 09:46:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:13:23.735 nvme0n1
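uuid2nguid is nothing more than the UUID with its dashes stripped (the helper hands the upper-cased form to -g; only the tr -d - shows in the xtrace), and the host-side controller is attached through the second SPDK app listening on /var/tmp/host.sock. Sketch with this run's values:

  # 390e1407-6c50-4d9a-be1f-302a78988ce3 -> 390E14076C504D9ABE1F302A78988CE3
  tr -d - <<< "390e1407-6c50-4d9a-be1f-302a78988ce3"
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 \
      -g 390E14076C504D9ABE1F302A78988CE3
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 \
      -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0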
00:13:23.735 09:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:13:23.735 09:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:13:23.992 nvme1n2
00:13:23.992 09:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs
00:13:23.992 09:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name'
00:13:23.992 09:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:13:23.992 09:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort
00:13:23.992 09:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs
00:13:24.250 09:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]]
00:13:24.250 09:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1
00:13:24.250 09:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1
00:13:24.250 09:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid'
00:13:24.507 09:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 390e1407-6c50-4d9a-be1f-302a78988ce3 == \3\9\0\e\1\4\0\7\-\6\c\5\0\-\4\d\9\a\-\b\e\1\f\-\3\0\2\a\7\8\9\8\8\c\e\3 ]]
00:13:24.507 09:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2
00:13:24.507 09:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid'
00:13:24.507 09:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2
00:13:24.765 09:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 34b669ca-6f72-46fe-9b8a-e5cfe212f611 == \3\4\b\6\6\9\c\a\-\6\f\7\2\-\4\6\f\e\-\9\b\8\a\-\e\5\c\f\e\2\1\2\f\6\1\1 ]]
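The round-trip check runs against the host app over /var/tmp/host.sock: bdev_get_bdevs must list exactly nvme0n1 and nvme1n2, and each bdev's uuid must match the NGUID pinned on the target. Condensed sketch of the assertions above:

  hostrpc() { rpc.py -s /var/tmp/host.sock "$@"; }
  [[ $(hostrpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs) == "nvme0n1 nvme1n2" ]]
  [[ $(hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid') == "390e1407-6c50-4d9a-be1f-302a78988ce3" ]]
  [[ $(hostrpc bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid') == "34b669ca-6f72-46fe-9b8a-e5cfe212f611" ]]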
00:13:24.765 09:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1856462
00:13:24.765 09:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1856462 ']'
00:13:24.765 09:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1856462
00:13:24.765 09:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname
00:13:24.765 09:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:13:24.765 09:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1856462
00:13:24.765 09:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:13:24.765 09:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:13:24.765 09:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1856462'
00:13:24.765 killing process with pid 1856462
00:13:24.765 09:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1856462
00:13:24.765 09:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1856462
00:13:25.024 09:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:25.281 09:46:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT
00:13:25.281 09:46:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini
00:13:25.281 09:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:25.281 09:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync
00:13:25.281 09:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:25.281 09:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e
00:13:25.281 09:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:25.281 09:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:25.281 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:13:25.539 09:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:25.539 09:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e
00:13:25.539 09:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0
00:13:25.539 09:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1854965 ']'
00:13:25.539 09:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1854965
00:13:25.539 09:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1854965 ']'
00:13:25.539 09:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1854965
00:13:25.539 09:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname
00:13:25.539 09:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:13:25.539 09:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1854965
00:13:25.539 09:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:13:25.539 09:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:13:25.539 09:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1854965'
00:13:25.539 killing process with pid 1854965
00:13:25.539 09:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1854965
00:13:25.539 09:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1854965
00:13:25.798 09:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:13:25.798 09:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:13:25.798 09:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:13:25.798 09:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:25.798 09:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:25.798 09:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:25.798 09:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:25.798 09:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
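Teardown mirrors setup in reverse; condensed, the nvmftestfini path just traced is the following (sketch; the interface flush appears just below, after the netns is removed):

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # then kill the target pid
  modprobe -v -r nvme-tcp       # unloads nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  ip -4 addr flush cvl_0_1      # drop the initiator-side test address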
00:13:27.697 09:46:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:27.697
00:13:27.697 real 0m20.342s
00:13:27.697 user 0m26.350s
00:13:27.697 sys 0m4.094s
00:13:27.697 09:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable
00:13:27.697 09:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:13:27.697 ************************************
00:13:27.697 END TEST nvmf_ns_masking
00:13:27.697 ************************************
00:13:27.697 09:46:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:13:27.697 09:46:44 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]]
00:13:27.697 09:46:44 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:13:27.697 09:46:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:13:27.697 09:46:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:27.697 09:46:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:27.955 ************************************
00:13:27.955 START TEST nvmf_nvme_cli
00:13:27.955 ************************************
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:13:27.955 * Looking for test storage...
00:13:27.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH
00:13:27.955 09:46:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=()
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable
00:13:27.956 09:46:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:13:29.858 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=()
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=()
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=()
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=()
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=()
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=()
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=()
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:13:29.859 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:13:29.859 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:13:29.859 Found net devices under 0000:0a:00.0: cvl_0_0
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:13:29.859 Found net devices under 0000:0a:00.1: cvl_0_1
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:13:29.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:29.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms
00:13:29.859
00:13:29.859 --- 10.0.0.2 ping statistics ---
00:13:29.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:29.859 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:29.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:29.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms
00:13:29.859
00:13:29.859 --- 10.0.0.1 ping statistics ---
00:13:29.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:29.859 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:13:29.859 09:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp
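nvmf_tcp_init is what lets host and target share one machine over real NIC ports: the first E810 port (cvl_0_0) moves into a private network namespace and becomes the target side, while the second (cvl_0_1) stays in the root namespace as the initiator. Condensed from the trace above (sketch; interface names come from this inventory):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # reachability check each way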
00:13:30.152 09:46:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.152 09:46:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.152 [2024-07-15 09:46:46.702242] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:13:30.152 [2024-07-15 09:46:46.702333] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.152 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.152 [2024-07-15 09:46:46.750871] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:30.152 [2024-07-15 09:46:46.782049] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:30.152 [2024-07-15 09:46:46.882059] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.152 [2024-07-15 09:46:46.882125] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.152 [2024-07-15 09:46:46.882141] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.152 [2024-07-15 09:46:46.882155] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.152 [2024-07-15 09:46:46.882167] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.152 [2024-07-15 09:46:46.882229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.152 [2024-07-15 09:46:46.882259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.152 [2024-07-15 09:46:46.882314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.152 [2024-07-15 09:46:46.882317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.410 [2024-07-15 09:46:47.033719] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.410 Malloc0 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.410 Malloc1 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.410 [2024-07-15 09:46:47.118714] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.410 09:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:13:30.667 00:13:30.667 Discovery Log Number of Records 2, Generation counter 2 00:13:30.667 =====Discovery Log Entry 0====== 00:13:30.667 trtype: tcp 00:13:30.667 adrfam: ipv4 00:13:30.667 subtype: current discovery subsystem 00:13:30.667 treq: not required 00:13:30.667 portid: 0 00:13:30.667 trsvcid: 4420 00:13:30.667 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:30.667 traddr: 10.0.0.2 00:13:30.667 eflags: explicit discovery connections, duplicate discovery information 00:13:30.667 sectype: none 
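The target above is assembled entirely through rpc_cmd, the test suite's wrapper around SPDK's rpc.py. A standalone sketch of the same sequence, with every argument taken from the log and scripts/rpc.py assumed to be the stock SPDK RPC client talking to an already-running nvmf_tgt:

  # TCP transport, options exactly as passed above (-u 8192 sets the IO unit size)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # two 64 MiB malloc bdevs with 512-byte blocks to serve as namespaces
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  # subsystem with the serial and model string that discovery/identify report back
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  # expose the subsystem plus the discovery service on 10.0.0.2:4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The two records in the surrounding discovery log (the discovery subsystem itself, then cnode1) correspond to these two listeners.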
00:13:30.667 =====Discovery Log Entry 1====== 00:13:30.667 trtype: tcp 00:13:30.667 adrfam: ipv4 00:13:30.667 subtype: nvme subsystem 00:13:30.667 treq: not required 00:13:30.667 portid: 0 00:13:30.667 trsvcid: 4420 00:13:30.667 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:30.667 traddr: 10.0.0.2 00:13:30.667 eflags: none 00:13:30.667 sectype: none 00:13:30.667 09:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:30.667 09:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:30.667 09:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:30.667 09:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:30.667 09:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:30.667 09:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:30.667 09:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:30.667 09:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:30.667 09:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:30.667 09:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:30.667 09:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:31.232 09:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:31.232 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:13:31.232 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.232 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:31.232 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:31.232 09:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:13:33.127 09:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:33.127 09:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:33.127 09:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:33.127 09:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:33.127 09:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:33.127 09:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:13:33.127 09:46:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:33.127 09:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:33.127 09:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:33.127 09:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:33.385 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:33.385 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:33.385 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:33.385 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:33.385 09:46:50 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:33.385 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:33.385 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:33.385 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:33.385 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:33.385 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:33.385 09:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:33.385 /dev/nvme0n1 ]] 00:13:33.385 09:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:33.385 09:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:33.385 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:33.385 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:33.385 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:33.643 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:33.643 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:33.643 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:33.643 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:33.643 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:33.643 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:33.643 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:33.643 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:33.643 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:33.643 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:33.643 09:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:33.643 09:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:33.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.643 09:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:33.643 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:13:33.643 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:33.643 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.643 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:33.643 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.902 09:46:50 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:33.902 rmmod nvme_tcp 00:13:33.902 rmmod nvme_fabrics 00:13:33.902 rmmod nvme_keyring 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1858945 ']' 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1858945 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1858945 ']' 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1858945 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1858945 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1858945' 00:13:33.902 killing process with pid 1858945 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1858945 00:13:33.902 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1858945 00:13:34.163 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:34.163 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:34.163 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:34.163 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:34.163 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:34.163 09:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.163 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.163 09:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.063 09:46:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:36.063 00:13:36.063 real 0m8.312s 00:13:36.063 user 0m15.945s 00:13:36.063 sys 0m2.167s 00:13:36.063 09:46:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:36.063 09:46:52 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@10 -- # set +x 00:13:36.063 ************************************ 00:13:36.063 END TEST nvmf_nvme_cli 00:13:36.063 ************************************ 00:13:36.064 09:46:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:36.064 09:46:52 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:13:36.064 09:46:52 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:36.064 09:46:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:36.064 09:46:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:36.064 09:46:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:36.322 ************************************ 00:13:36.322 START TEST nvmf_vfio_user 00:13:36.322 ************************************ 00:13:36.322 09:46:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:36.322 * Looking for test storage... 00:13:36.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.322 09:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.322 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:36.322 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.322 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.322 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.322 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1859754 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1859754' 00:13:36.323 Process pid: 1859754 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1859754 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1859754 ']' 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:36.323 09:46:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:36.323 [2024-07-15 09:46:52.992798] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:13:36.323 [2024-07-15 09:46:52.992911] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.323 EAL: No free 2048 kB hugepages reported on node 1 00:13:36.323 [2024-07-15 09:46:53.025391] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:36.323 [2024-07-15 09:46:53.051480] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:36.582 [2024-07-15 09:46:53.138794] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.582 [2024-07-15 09:46:53.138847] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
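Once nvmf_tgt is up, the setup loop that follows builds one vfio-user endpoint per device: a VFIOUSER transport is created once, then each device gets a socket directory, a malloc bdev, a subsystem, and a listener whose address is that directory rather than an IP and port. Condensed into a sketch, with the NQNs, serials, and paths as they appear in the log:

  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      # the directory becomes the vfio-user socket path for this controller
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
      scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      # the listener address is the socket directory created above
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done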
00:13:36.582 [2024-07-15 09:46:53.138887] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.582 [2024-07-15 09:46:53.138898] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.582 [2024-07-15 09:46:53.138908] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:36.582 [2024-07-15 09:46:53.138991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.582 [2024-07-15 09:46:53.139050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.582 [2024-07-15 09:46:53.139079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.582 [2024-07-15 09:46:53.139081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.582 09:46:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:36.582 09:46:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:36.582 09:46:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:37.516 09:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:38.080 09:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:38.080 09:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:38.080 09:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:38.080 09:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:38.080 09:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:38.080 Malloc1 00:13:38.337 09:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:38.337 09:46:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:38.595 09:46:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:38.853 09:46:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:38.853 09:46:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:38.853 09:46:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:39.111 Malloc2 00:13:39.111 09:46:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:39.369 09:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:39.628 09:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:39.885 09:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:39.885 09:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:39.885 09:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:39.885 09:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:39.885 09:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:39.885 09:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:39.885 [2024-07-15 09:46:56.620713] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:13:39.885 [2024-07-15 09:46:56.620752] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1860181 ] 00:13:39.886 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.886 [2024-07-15 09:46:56.637621] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:39.886 [2024-07-15 09:46:56.655297] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:39.886 [2024-07-15 09:46:56.661337] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:39.886 [2024-07-15 09:46:56.661369] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe9dacf8000 00:13:39.886 [2024-07-15 09:46:56.662328] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:39.886 [2024-07-15 09:46:56.663325] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:39.886 [2024-07-15 09:46:56.664330] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:39.886 [2024-07-15 09:46:56.665331] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:39.886 [2024-07-15 09:46:56.666335] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:39.886 [2024-07-15 09:46:56.667341] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:39.886 [2024-07-15 09:46:56.668347] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:39.886 [2024-07-15 09:46:56.669347] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap 
offset 0 00:13:40.145 [2024-07-15 09:46:56.670356] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:40.145 [2024-07-15 09:46:56.670376] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe9d9aba000 00:13:40.145 [2024-07-15 09:46:56.671581] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:40.145 [2024-07-15 09:46:56.685520] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:40.145 [2024-07-15 09:46:56.685554] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:40.145 [2024-07-15 09:46:56.694496] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:40.145 [2024-07-15 09:46:56.694548] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:40.145 [2024-07-15 09:46:56.694636] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:40.145 [2024-07-15 09:46:56.694664] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:40.145 [2024-07-15 09:46:56.694675] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:40.145 [2024-07-15 09:46:56.695491] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:40.145 [2024-07-15 09:46:56.695511] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:40.145 [2024-07-15 09:46:56.695523] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:40.145 [2024-07-15 09:46:56.696495] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:40.145 [2024-07-15 09:46:56.696513] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:40.145 [2024-07-15 09:46:56.696525] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:40.145 [2024-07-15 09:46:56.697499] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:40.145 [2024-07-15 09:46:56.697517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:40.145 [2024-07-15 09:46:56.698506] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:40.145 [2024-07-15 09:46:56.698525] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:40.145 [2024-07-15 
09:46:56.698533] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:40.145 [2024-07-15 09:46:56.698544] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:40.145 [2024-07-15 09:46:56.698653] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:40.145 [2024-07-15 09:46:56.698661] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:40.145 [2024-07-15 09:46:56.698669] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:40.145 [2024-07-15 09:46:56.699511] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:40.145 [2024-07-15 09:46:56.700513] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:40.145 [2024-07-15 09:46:56.701519] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:40.145 [2024-07-15 09:46:56.702515] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:40.145 [2024-07-15 09:46:56.702623] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:40.145 [2024-07-15 09:46:56.703531] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:40.145 [2024-07-15 09:46:56.703548] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:40.145 [2024-07-15 09:46:56.703557] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:40.145 [2024-07-15 09:46:56.703580] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:40.145 [2024-07-15 09:46:56.703593] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:40.145 [2024-07-15 09:46:56.703616] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:40.145 [2024-07-15 09:46:56.703625] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:40.145 [2024-07-15 09:46:56.703644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:40.145 [2024-07-15 09:46:56.703695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:40.145 [2024-07-15 09:46:56.703711] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:40.145 [2024-07-15 
09:46:56.703723] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:40.145 [2024-07-15 09:46:56.703730] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:40.145 [2024-07-15 09:46:56.703738] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:40.145 [2024-07-15 09:46:56.703745] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:40.145 [2024-07-15 09:46:56.703752] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:40.145 [2024-07-15 09:46:56.703760] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:40.145 [2024-07-15 09:46:56.703772] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:40.145 [2024-07-15 09:46:56.703786] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:40.145 [2024-07-15 09:46:56.703800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:40.145 [2024-07-15 09:46:56.703819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.145 [2024-07-15 09:46:56.703833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.145 [2024-07-15 09:46:56.703844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.145 [2024-07-15 09:46:56.703856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.145 [2024-07-15 09:46:56.703885] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:40.145 [2024-07-15 09:46:56.703902] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:40.145 [2024-07-15 09:46:56.703927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:40.145 [2024-07-15 09:46:56.703940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:40.146 [2024-07-15 09:46:56.703950] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:40.146 [2024-07-15 09:46:56.703958] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:40.146 [2024-07-15 09:46:56.703969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:40.146 
[2024-07-15 09:46:56.703979] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:40.146 [2024-07-15 09:46:56.703992] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:40.146 [2024-07-15 09:46:56.704008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:40.146 [2024-07-15 09:46:56.704071] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:40.146 [2024-07-15 09:46:56.704086] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:40.146 [2024-07-15 09:46:56.704099] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:40.146 [2024-07-15 09:46:56.704107] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:40.146 [2024-07-15 09:46:56.704116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:40.146 [2024-07-15 09:46:56.704133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:40.146 [2024-07-15 09:46:56.704149] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:40.146 [2024-07-15 09:46:56.704164] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:40.146 [2024-07-15 09:46:56.704179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:40.146 [2024-07-15 09:46:56.704191] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:40.146 [2024-07-15 09:46:56.704213] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:40.146 [2024-07-15 09:46:56.704223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:40.146 [2024-07-15 09:46:56.704247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:40.146 [2024-07-15 09:46:56.704269] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:40.146 [2024-07-15 09:46:56.704283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:40.146 [2024-07-15 09:46:56.704295] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:40.146 [2024-07-15 09:46:56.704302] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:40.146 [2024-07-15 09:46:56.704311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 
0x2000002fb000 PRP2 0x0 00:13:40.146 [2024-07-15 09:46:56.704324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:40.146 [2024-07-15 09:46:56.704337] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:40.146 [2024-07-15 09:46:56.704348] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:13:40.146 [2024-07-15 09:46:56.704360] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:40.146 [2024-07-15 09:46:56.704371] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:40.146 [2024-07-15 09:46:56.704378] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:40.146 [2024-07-15 09:46:56.704386] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:40.146 [2024-07-15 09:46:56.704397] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:40.146 [2024-07-15 09:46:56.704405] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:40.146 [2024-07-15 09:46:56.704413] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:40.146 [2024-07-15 09:46:56.704440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:40.146 [2024-07-15 09:46:56.704458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:40.146 [2024-07-15 09:46:56.704476] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:40.146 [2024-07-15 09:46:56.704488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:40.146 [2024-07-15 09:46:56.704504] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:40.146 [2024-07-15 09:46:56.704518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:40.146 [2024-07-15 09:46:56.704534] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:40.146 [2024-07-15 09:46:56.704545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:40.146 [2024-07-15 09:46:56.704566] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:40.146 [2024-07-15 09:46:56.704576] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:40.146 [2024-07-15 09:46:56.704582] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:40.146 [2024-07-15 09:46:56.704588] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:40.146 [2024-07-15 09:46:56.704597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:40.146 [2024-07-15 09:46:56.704608] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:40.146 [2024-07-15 09:46:56.704615] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:40.146 [2024-07-15 09:46:56.704624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:40.146 [2024-07-15 09:46:56.704634] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:40.146 [2024-07-15 09:46:56.704642] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:40.146 [2024-07-15 09:46:56.704650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:40.146 [2024-07-15 09:46:56.704662] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:40.146 [2024-07-15 09:46:56.704670] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:40.146 [2024-07-15 09:46:56.704678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:40.146 [2024-07-15 09:46:56.704689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:40.146 [2024-07-15 09:46:56.704708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:40.146 [2024-07-15 09:46:56.704728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:40.146 [2024-07-15 09:46:56.704740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:40.146 ===================================================== 00:13:40.146 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:40.146 ===================================================== 00:13:40.146 Controller Capabilities/Features 00:13:40.146 ================================ 00:13:40.146 Vendor ID: 4e58 00:13:40.146 Subsystem Vendor ID: 4e58 00:13:40.146 Serial Number: SPDK1 00:13:40.146 Model Number: SPDK bdev Controller 00:13:40.146 Firmware Version: 24.09 00:13:40.146 Recommended Arb Burst: 6 00:13:40.146 IEEE OUI Identifier: 8d 6b 50 00:13:40.146 Multi-path I/O 00:13:40.146 May have multiple subsystem ports: Yes 00:13:40.146 May have multiple controllers: Yes 00:13:40.146 Associated with SR-IOV VF: No 00:13:40.146 Max Data Transfer Size: 131072 00:13:40.146 Max Number of Namespaces: 32 00:13:40.146 Max Number of I/O Queues: 127 00:13:40.146 NVMe Specification Version (VS): 1.3 00:13:40.146 NVMe Specification Version (Identify): 1.3 00:13:40.146 Maximum Queue Entries: 256 
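Everything in this controller dump, including the interleaved DEBUG lines, comes from the single spdk_nvme_identify invocation shown near the start of this test: a vfio-user transport ID string takes the place of a PCI address, and the -L flags enable the nvme, nvme_vfio, and vfio_pci debug log components whose output is threaded through the lines above:

  build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -L nvme -L nvme_vfio -L vfio_pci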
00:13:40.146 Contiguous Queues Required: Yes 00:13:40.146 Arbitration Mechanisms Supported 00:13:40.146 Weighted Round Robin: Not Supported 00:13:40.146 Vendor Specific: Not Supported 00:13:40.146 Reset Timeout: 15000 ms 00:13:40.146 Doorbell Stride: 4 bytes 00:13:40.146 NVM Subsystem Reset: Not Supported 00:13:40.146 Command Sets Supported 00:13:40.146 NVM Command Set: Supported 00:13:40.146 Boot Partition: Not Supported 00:13:40.146 Memory Page Size Minimum: 4096 bytes 00:13:40.146 Memory Page Size Maximum: 4096 bytes 00:13:40.146 Persistent Memory Region: Not Supported 00:13:40.146 Optional Asynchronous Events Supported 00:13:40.146 Namespace Attribute Notices: Supported 00:13:40.146 Firmware Activation Notices: Not Supported 00:13:40.146 ANA Change Notices: Not Supported 00:13:40.146 PLE Aggregate Log Change Notices: Not Supported 00:13:40.146 LBA Status Info Alert Notices: Not Supported 00:13:40.146 EGE Aggregate Log Change Notices: Not Supported 00:13:40.146 Normal NVM Subsystem Shutdown event: Not Supported 00:13:40.146 Zone Descriptor Change Notices: Not Supported 00:13:40.146 Discovery Log Change Notices: Not Supported 00:13:40.146 Controller Attributes 00:13:40.146 128-bit Host Identifier: Supported 00:13:40.146 Non-Operational Permissive Mode: Not Supported 00:13:40.146 NVM Sets: Not Supported 00:13:40.146 Read Recovery Levels: Not Supported 00:13:40.146 Endurance Groups: Not Supported 00:13:40.146 Predictable Latency Mode: Not Supported 00:13:40.146 Traffic Based Keep ALive: Not Supported 00:13:40.146 Namespace Granularity: Not Supported 00:13:40.146 SQ Associations: Not Supported 00:13:40.146 UUID List: Not Supported 00:13:40.146 Multi-Domain Subsystem: Not Supported 00:13:40.146 Fixed Capacity Management: Not Supported 00:13:40.146 Variable Capacity Management: Not Supported 00:13:40.146 Delete Endurance Group: Not Supported 00:13:40.146 Delete NVM Set: Not Supported 00:13:40.146 Extended LBA Formats Supported: Not Supported 00:13:40.147 Flexible Data Placement Supported: Not Supported 00:13:40.147 00:13:40.147 Controller Memory Buffer Support 00:13:40.147 ================================ 00:13:40.147 Supported: No 00:13:40.147 00:13:40.147 Persistent Memory Region Support 00:13:40.147 ================================ 00:13:40.147 Supported: No 00:13:40.147 00:13:40.147 Admin Command Set Attributes 00:13:40.147 ============================ 00:13:40.147 Security Send/Receive: Not Supported 00:13:40.147 Format NVM: Not Supported 00:13:40.147 Firmware Activate/Download: Not Supported 00:13:40.147 Namespace Management: Not Supported 00:13:40.147 Device Self-Test: Not Supported 00:13:40.147 Directives: Not Supported 00:13:40.147 NVMe-MI: Not Supported 00:13:40.147 Virtualization Management: Not Supported 00:13:40.147 Doorbell Buffer Config: Not Supported 00:13:40.147 Get LBA Status Capability: Not Supported 00:13:40.147 Command & Feature Lockdown Capability: Not Supported 00:13:40.147 Abort Command Limit: 4 00:13:40.147 Async Event Request Limit: 4 00:13:40.147 Number of Firmware Slots: N/A 00:13:40.147 Firmware Slot 1 Read-Only: N/A 00:13:40.147 Firmware Activation Without Reset: N/A 00:13:40.147 Multiple Update Detection Support: N/A 00:13:40.147 Firmware Update Granularity: No Information Provided 00:13:40.147 Per-Namespace SMART Log: No 00:13:40.147 Asymmetric Namespace Access Log Page: Not Supported 00:13:40.147 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:40.147 Command Effects Log Page: Supported 00:13:40.147 Get Log Page Extended Data: Supported 00:13:40.147 Telemetry 
Log Pages: Not Supported 00:13:40.147 Persistent Event Log Pages: Not Supported 00:13:40.147 Supported Log Pages Log Page: May Support 00:13:40.147 Commands Supported & Effects Log Page: Not Supported 00:13:40.147 Feature Identifiers & Effects Log Page:May Support 00:13:40.147 NVMe-MI Commands & Effects Log Page: May Support 00:13:40.147 Data Area 4 for Telemetry Log: Not Supported 00:13:40.147 Error Log Page Entries Supported: 128 00:13:40.147 Keep Alive: Supported 00:13:40.147 Keep Alive Granularity: 10000 ms 00:13:40.147 00:13:40.147 NVM Command Set Attributes 00:13:40.147 ========================== 00:13:40.147 Submission Queue Entry Size 00:13:40.147 Max: 64 00:13:40.147 Min: 64 00:13:40.147 Completion Queue Entry Size 00:13:40.147 Max: 16 00:13:40.147 Min: 16 00:13:40.147 Number of Namespaces: 32 00:13:40.147 Compare Command: Supported 00:13:40.147 Write Uncorrectable Command: Not Supported 00:13:40.147 Dataset Management Command: Supported 00:13:40.147 Write Zeroes Command: Supported 00:13:40.147 Set Features Save Field: Not Supported 00:13:40.147 Reservations: Not Supported 00:13:40.147 Timestamp: Not Supported 00:13:40.147 Copy: Supported 00:13:40.147 Volatile Write Cache: Present 00:13:40.147 Atomic Write Unit (Normal): 1 00:13:40.147 Atomic Write Unit (PFail): 1 00:13:40.147 Atomic Compare & Write Unit: 1 00:13:40.147 Fused Compare & Write: Supported 00:13:40.147 Scatter-Gather List 00:13:40.147 SGL Command Set: Supported (Dword aligned) 00:13:40.147 SGL Keyed: Not Supported 00:13:40.147 SGL Bit Bucket Descriptor: Not Supported 00:13:40.147 SGL Metadata Pointer: Not Supported 00:13:40.147 Oversized SGL: Not Supported 00:13:40.147 SGL Metadata Address: Not Supported 00:13:40.147 SGL Offset: Not Supported 00:13:40.147 Transport SGL Data Block: Not Supported 00:13:40.147 Replay Protected Memory Block: Not Supported 00:13:40.147 00:13:40.147 Firmware Slot Information 00:13:40.147 ========================= 00:13:40.147 Active slot: 1 00:13:40.147 Slot 1 Firmware Revision: 24.09 00:13:40.147 00:13:40.147 00:13:40.147 Commands Supported and Effects 00:13:40.147 ============================== 00:13:40.147 Admin Commands 00:13:40.147 -------------- 00:13:40.147 Get Log Page (02h): Supported 00:13:40.147 Identify (06h): Supported 00:13:40.147 Abort (08h): Supported 00:13:40.147 Set Features (09h): Supported 00:13:40.147 Get Features (0Ah): Supported 00:13:40.147 Asynchronous Event Request (0Ch): Supported 00:13:40.147 Keep Alive (18h): Supported 00:13:40.147 I/O Commands 00:13:40.147 ------------ 00:13:40.147 Flush (00h): Supported LBA-Change 00:13:40.147 Write (01h): Supported LBA-Change 00:13:40.147 Read (02h): Supported 00:13:40.147 Compare (05h): Supported 00:13:40.147 Write Zeroes (08h): Supported LBA-Change 00:13:40.147 Dataset Management (09h): Supported LBA-Change 00:13:40.147 Copy (19h): Supported LBA-Change 00:13:40.147 00:13:40.147 Error Log 00:13:40.147 ========= 00:13:40.147 00:13:40.147 Arbitration 00:13:40.147 =========== 00:13:40.147 Arbitration Burst: 1 00:13:40.147 00:13:40.147 Power Management 00:13:40.147 ================ 00:13:40.147 Number of Power States: 1 00:13:40.147 Current Power State: Power State #0 00:13:40.147 Power State #0: 00:13:40.147 Max Power: 0.00 W 00:13:40.147 Non-Operational State: Operational 00:13:40.147 Entry Latency: Not Reported 00:13:40.147 Exit Latency: Not Reported 00:13:40.147 Relative Read Throughput: 0 00:13:40.147 Relative Read Latency: 0 00:13:40.147 Relative Write Throughput: 0 00:13:40.147 Relative Write Latency: 0 00:13:40.147 Idle 
Power: Not Reported 00:13:40.147 Active Power: Not Reported 00:13:40.147 Non-Operational Permissive Mode: Not Supported 00:13:40.147 00:13:40.147 Health Information 00:13:40.147 ================== 00:13:40.147 Critical Warnings: 00:13:40.147 Available Spare Space: OK 00:13:40.147 Temperature: OK 00:13:40.147 Device Reliability: OK 00:13:40.147 Read Only: No 00:13:40.147 Volatile Memory Backup: OK 00:13:40.147 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:40.147 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:40.147 Available Spare: 0% 00:13:40.147 Available Spare Threshold: 0% 00:13:40.147 [2024-07-15 09:46:56.704889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:40.147 [2024-07-15 09:46:56.704907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:40.147 [2024-07-15 09:46:56.704957] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:40.147 [2024-07-15 09:46:56.704975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.147 [2024-07-15 09:46:56.704986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.147 [2024-07-15 09:46:56.704996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.147 [2024-07-15 09:46:56.705006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.147 [2024-07-15 09:46:56.705547] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:40.147 [2024-07-15 09:46:56.705565] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:40.147 [2024-07-15 09:46:56.706547] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:40.147 [2024-07-15 09:46:56.706637] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:40.147 [2024-07-15 09:46:56.706652] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:40.147 [2024-07-15 09:46:56.707557] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:40.147 [2024-07-15 09:46:56.707580] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:40.147 [2024-07-15 09:46:56.707633] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:40.147 [2024-07-15 09:46:56.712888] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:40.147 Life Percentage Used: 0% 00:13:40.147 Data Units Read: 0 00:13:40.147 Data Units Written: 0 00:13:40.147 Host Read Commands: 0 00:13:40.147 Host Write Commands: 0 00:13:40.147 Controller Busy Time: 0 minutes 00:13:40.147 Power Cycles: 0 00:13:40.147 Power On Hours: 0 hours 
00:13:40.147 Unsafe Shutdowns: 0 00:13:40.147 Unrecoverable Media Errors: 0 00:13:40.147 Lifetime Error Log Entries: 0 00:13:40.147 Warning Temperature Time: 0 minutes 00:13:40.147 Critical Temperature Time: 0 minutes 00:13:40.147 00:13:40.147 Number of Queues 00:13:40.147 ================ 00:13:40.147 Number of I/O Submission Queues: 127 00:13:40.147 Number of I/O Completion Queues: 127 00:13:40.147 00:13:40.148 Active Namespaces 00:13:40.148 ================= 00:13:40.148 Namespace ID:1 00:13:40.148 Error Recovery Timeout: Unlimited 00:13:40.148 Command Set Identifier: NVM (00h) 00:13:40.148 Deallocate: Supported 00:13:40.148 Deallocated/Unwritten Error: Not Supported 00:13:40.148 Deallocated Read Value: Unknown 00:13:40.148 Deallocate in Write Zeroes: Not Supported 00:13:40.148 Deallocated Guard Field: 0xFFFF 00:13:40.148 Flush: Supported 00:13:40.148 Reservation: Supported 00:13:40.148 Namespace Sharing Capabilities: Multiple Controllers 00:13:40.148 Size (in LBAs): 131072 (0GiB) 00:13:40.148 Capacity (in LBAs): 131072 (0GiB) 00:13:40.148 Utilization (in LBAs): 131072 (0GiB) 00:13:40.148 NGUID: 7A707BA1D01A4116BA74823A4EA58C6B 00:13:40.148 UUID: 7a707ba1-d01a-4116-ba74-823a4ea58c6b 00:13:40.148 Thin Provisioning: Not Supported 00:13:40.148 Per-NS Atomic Units: Yes 00:13:40.148 Atomic Boundary Size (Normal): 0 00:13:40.148 Atomic Boundary Size (PFail): 0 00:13:40.148 Atomic Boundary Offset: 0 00:13:40.148 Maximum Single Source Range Length: 65535 00:13:40.148 Maximum Copy Length: 65535 00:13:40.148 Maximum Source Range Count: 1 00:13:40.148 NGUID/EUI64 Never Reused: No 00:13:40.148 Namespace Write Protected: No 00:13:40.148 Number of LBA Formats: 1 00:13:40.148 Current LBA Format: LBA Format #00 00:13:40.148 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:40.148 00:13:40.148 09:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:40.148 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.405 [2024-07-15 09:46:56.944770] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:45.663 Initializing NVMe Controllers 00:13:45.663 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:45.663 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:45.663 Initialization complete. Launching workers. 
00:13:45.663 ======================================================== 00:13:45.663 Latency(us) 00:13:45.663 Device Information : IOPS MiB/s Average min max 00:13:45.663 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34321.85 134.07 3728.83 1157.27 11615.19 00:13:45.663 ======================================================== 00:13:45.663 Total : 34321.85 134.07 3728.83 1157.27 11615.19 00:13:45.663 00:13:45.663 [2024-07-15 09:47:01.964942] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:45.663 09:47:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:45.663 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.663 [2024-07-15 09:47:02.200093] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:50.974 Initializing NVMe Controllers 00:13:50.974 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:50.974 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:50.974 Initialization complete. Launching workers. 00:13:50.974 ======================================================== 00:13:50.974 Latency(us) 00:13:50.974 Device Information : IOPS MiB/s Average min max 00:13:50.974 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16000.00 62.50 8014.82 4972.82 15974.58 00:13:50.974 ======================================================== 00:13:50.974 Total : 16000.00 62.50 8014.82 4972.82 15974.58 00:13:50.974 00:13:50.974 [2024-07-15 09:47:07.237624] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:50.974 09:47:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:50.974 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.974 [2024-07-15 09:47:07.459721] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:56.239 [2024-07-15 09:47:12.540309] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:56.239 Initializing NVMe Controllers 00:13:56.239 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:56.239 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:56.239 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:56.239 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:56.239 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:56.239 Initialization complete. Launching workers. 
00:13:56.239 Starting thread on core 2 00:13:56.239 Starting thread on core 3 00:13:56.239 Starting thread on core 1 00:13:56.239 09:47:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:56.239 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.239 [2024-07-15 09:47:12.837385] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:59.524 [2024-07-15 09:47:15.906274] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:59.524 Initializing NVMe Controllers 00:13:59.524 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:59.524 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:59.524 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:59.524 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:59.524 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:59.524 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:59.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:59.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:59.524 Initialization complete. Launching workers. 00:13:59.524 Starting thread on core 1 with urgent priority queue 00:13:59.524 Starting thread on core 2 with urgent priority queue 00:13:59.524 Starting thread on core 3 with urgent priority queue 00:13:59.524 Starting thread on core 0 with urgent priority queue 00:13:59.524 SPDK bdev Controller (SPDK1 ) core 0: 5723.33 IO/s 17.47 secs/100000 ios 00:13:59.524 SPDK bdev Controller (SPDK1 ) core 1: 5735.33 IO/s 17.44 secs/100000 ios 00:13:59.524 SPDK bdev Controller (SPDK1 ) core 2: 5910.33 IO/s 16.92 secs/100000 ios 00:13:59.524 SPDK bdev Controller (SPDK1 ) core 3: 5225.00 IO/s 19.14 secs/100000 ios 00:13:59.524 ======================================================== 00:13:59.524 00:13:59.525 09:47:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:59.525 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.525 [2024-07-15 09:47:16.201008] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:59.525 Initializing NVMe Controllers 00:13:59.525 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:59.525 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:59.525 Namespace ID: 1 size: 0GB 00:13:59.525 Initialization complete. 00:13:59.525 INFO: using host memory buffer for IO 00:13:59.525 Hello world! 
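For reference: the example runs above (perf read/write, reconnect, arbitration, hello_world) all target the same vfio-user controller, so any of them can be repeated by hand while a target is up. A minimal sketch, reusing only flags that appear in the invocations above; SPDK_DIR is a placeholder for the build tree, and the transport ID mirrors this run's socket path and NQN:

# Sketch only -- paths, NQN, and flags are carried over from the invocations in this log.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

# hello_world: one small write+read through the controller; -d 256 and -g are
# the same memory-sizing / single-file-segment flags the test used above
# (the EAL parameter lines in this log show --single-file-segments).
"$SPDK_DIR/build/examples/hello_world" -d 256 -g -r "$TRID"

# 5 s of 4 KiB reads at queue depth 128, pinned to core 1 (-c 0x2),
# exactly as the @84 step above ran it.
"$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2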
00:13:59.525 [2024-07-15 09:47:16.236559] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:59.525 09:47:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:59.784 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.784 [2024-07-15 09:47:16.527276] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:01.165 Initializing NVMe Controllers 00:14:01.165 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:01.165 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:01.165 Initialization complete. Launching workers. 00:14:01.165 submit (in ns) avg, min, max = 7939.4, 3502.2, 4018774.4 00:14:01.165 complete (in ns) avg, min, max = 27598.7, 2072.2, 4015161.1 00:14:01.165 00:14:01.165 Submit histogram 00:14:01.165 ================ 00:14:01.165 Range in us Cumulative Count 00:14:01.165 3.484 - 3.508: 0.0528% ( 7) 00:14:01.165 3.508 - 3.532: 0.6488% ( 79) 00:14:01.165 3.532 - 3.556: 1.9615% ( 174) 00:14:01.165 3.556 - 3.579: 5.0170% ( 405) 00:14:01.165 3.579 - 3.603: 10.5243% ( 730) 00:14:01.165 3.603 - 3.627: 17.2690% ( 894) 00:14:01.165 3.627 - 3.650: 24.6775% ( 982) 00:14:01.165 3.650 - 3.674: 31.9728% ( 967) 00:14:01.165 3.674 - 3.698: 39.4191% ( 987) 00:14:01.165 3.698 - 3.721: 46.7899% ( 977) 00:14:01.165 3.721 - 3.745: 52.1916% ( 716) 00:14:01.165 3.745 - 3.769: 56.8389% ( 616) 00:14:01.165 3.769 - 3.793: 60.8902% ( 537) 00:14:01.165 3.793 - 3.816: 64.5945% ( 491) 00:14:01.165 3.816 - 3.840: 68.0045% ( 452) 00:14:01.165 3.840 - 3.864: 71.9879% ( 528) 00:14:01.165 3.864 - 3.887: 75.6997% ( 492) 00:14:01.165 3.887 - 3.911: 79.4342% ( 495) 00:14:01.165 3.911 - 3.935: 82.9272% ( 463) 00:14:01.165 3.935 - 3.959: 85.1528% ( 295) 00:14:01.165 3.959 - 3.982: 87.1747% ( 268) 00:14:01.165 3.982 - 4.006: 88.7439% ( 208) 00:14:01.165 4.006 - 4.030: 90.0943% ( 179) 00:14:01.165 4.030 - 4.053: 91.2863% ( 158) 00:14:01.165 4.053 - 4.077: 92.4104% ( 149) 00:14:01.165 4.077 - 4.101: 93.3535% ( 125) 00:14:01.165 4.101 - 4.124: 94.1079% ( 100) 00:14:01.165 4.124 - 4.148: 94.8171% ( 94) 00:14:01.165 4.148 - 4.172: 95.2320% ( 55) 00:14:01.165 4.172 - 4.196: 95.4508% ( 29) 00:14:01.165 4.196 - 4.219: 95.7676% ( 42) 00:14:01.165 4.219 - 4.243: 95.9336% ( 22) 00:14:01.165 4.243 - 4.267: 96.1147% ( 24) 00:14:01.165 4.267 - 4.290: 96.2580% ( 19) 00:14:01.165 4.290 - 4.314: 96.4089% ( 20) 00:14:01.165 4.314 - 4.338: 96.5447% ( 18) 00:14:01.165 4.338 - 4.361: 96.6051% ( 8) 00:14:01.165 4.361 - 4.385: 96.6579% ( 7) 00:14:01.165 4.385 - 4.409: 96.6880% ( 4) 00:14:01.165 4.409 - 4.433: 96.7710% ( 11) 00:14:01.165 4.433 - 4.456: 96.8012% ( 4) 00:14:01.165 4.456 - 4.480: 96.8163% ( 2) 00:14:01.165 4.480 - 4.504: 96.8465% ( 4) 00:14:01.165 4.504 - 4.527: 96.8616% ( 2) 00:14:01.165 4.527 - 4.551: 96.8917% ( 4) 00:14:01.165 4.551 - 4.575: 96.9068% ( 2) 00:14:01.165 4.599 - 4.622: 96.9144% ( 1) 00:14:01.165 4.622 - 4.646: 96.9219% ( 1) 00:14:01.165 4.646 - 4.670: 96.9370% ( 2) 00:14:01.165 4.670 - 4.693: 96.9445% ( 1) 00:14:01.165 4.717 - 4.741: 96.9521% ( 1) 00:14:01.165 4.741 - 4.764: 96.9672% ( 2) 00:14:01.165 4.764 - 4.788: 96.9974% ( 4) 00:14:01.165 4.788 - 4.812: 97.0351% ( 5) 00:14:01.165 4.812 - 4.836: 97.0803% ( 6) 00:14:01.165 4.836 - 4.859: 97.1181% ( 5) 00:14:01.165 4.859 - 
4.883: 97.1633% ( 6) 00:14:01.165 4.883 - 4.907: 97.2388% ( 10) 00:14:01.165 4.907 - 4.930: 97.2840% ( 6) 00:14:01.165 4.930 - 4.954: 97.3142% ( 4) 00:14:01.165 4.954 - 4.978: 97.3670% ( 7) 00:14:01.165 4.978 - 5.001: 97.4123% ( 6) 00:14:01.165 5.001 - 5.025: 97.4198% ( 1) 00:14:01.165 5.025 - 5.049: 97.4500% ( 4) 00:14:01.165 5.049 - 5.073: 97.4727% ( 3) 00:14:01.165 5.073 - 5.096: 97.4802% ( 1) 00:14:01.165 5.096 - 5.120: 97.5028% ( 3) 00:14:01.165 5.120 - 5.144: 97.5255% ( 3) 00:14:01.165 5.144 - 5.167: 97.5330% ( 1) 00:14:01.165 5.167 - 5.191: 97.5556% ( 3) 00:14:01.165 5.191 - 5.215: 97.5707% ( 2) 00:14:01.165 5.215 - 5.239: 97.5858% ( 2) 00:14:01.165 5.239 - 5.262: 97.6160% ( 4) 00:14:01.165 5.262 - 5.286: 97.6311% ( 2) 00:14:01.165 5.286 - 5.310: 97.6386% ( 1) 00:14:01.165 5.310 - 5.333: 97.6462% ( 1) 00:14:01.165 5.333 - 5.357: 97.6537% ( 1) 00:14:01.165 5.357 - 5.381: 97.6914% ( 5) 00:14:01.165 5.381 - 5.404: 97.7065% ( 2) 00:14:01.165 5.404 - 5.428: 97.7141% ( 1) 00:14:01.165 5.476 - 5.499: 97.7216% ( 1) 00:14:01.165 5.523 - 5.547: 97.7367% ( 2) 00:14:01.165 5.570 - 5.594: 97.7442% ( 1) 00:14:01.165 5.736 - 5.760: 97.7518% ( 1) 00:14:01.165 5.831 - 5.855: 97.7593% ( 1) 00:14:01.165 5.950 - 5.973: 97.7669% ( 1) 00:14:01.165 6.044 - 6.068: 97.7744% ( 1) 00:14:01.165 6.163 - 6.210: 97.7895% ( 2) 00:14:01.165 6.258 - 6.305: 97.7971% ( 1) 00:14:01.165 6.400 - 6.447: 97.8046% ( 1) 00:14:01.165 6.637 - 6.684: 97.8272% ( 3) 00:14:01.165 6.732 - 6.779: 97.8348% ( 1) 00:14:01.165 6.827 - 6.874: 97.8499% ( 2) 00:14:01.165 6.969 - 7.016: 97.8574% ( 1) 00:14:01.165 7.064 - 7.111: 97.8725% ( 2) 00:14:01.165 7.206 - 7.253: 97.9102% ( 5) 00:14:01.165 7.253 - 7.301: 97.9178% ( 1) 00:14:01.165 7.348 - 7.396: 97.9253% ( 1) 00:14:01.165 7.443 - 7.490: 97.9479% ( 3) 00:14:01.165 7.490 - 7.538: 97.9555% ( 1) 00:14:01.165 7.538 - 7.585: 97.9857% ( 4) 00:14:01.165 7.727 - 7.775: 97.9932% ( 1) 00:14:01.165 7.775 - 7.822: 98.0083% ( 2) 00:14:01.165 7.822 - 7.870: 98.0385% ( 4) 00:14:01.165 7.917 - 7.964: 98.0687% ( 4) 00:14:01.165 7.964 - 8.012: 98.0837% ( 2) 00:14:01.165 8.012 - 8.059: 98.0913% ( 1) 00:14:01.165 8.059 - 8.107: 98.1139% ( 3) 00:14:01.165 8.107 - 8.154: 98.1290% ( 2) 00:14:01.165 8.154 - 8.201: 98.1441% ( 2) 00:14:01.165 8.201 - 8.249: 98.1516% ( 1) 00:14:01.165 8.249 - 8.296: 98.1592% ( 1) 00:14:01.165 8.344 - 8.391: 98.1667% ( 1) 00:14:01.165 8.391 - 8.439: 98.1743% ( 1) 00:14:01.165 8.439 - 8.486: 98.1818% ( 1) 00:14:01.165 8.533 - 8.581: 98.1969% ( 2) 00:14:01.165 8.676 - 8.723: 98.2045% ( 1) 00:14:01.165 8.723 - 8.770: 98.2120% ( 1) 00:14:01.165 8.770 - 8.818: 98.2195% ( 1) 00:14:01.165 8.818 - 8.865: 98.2271% ( 1) 00:14:01.165 8.865 - 8.913: 98.2422% ( 2) 00:14:01.165 8.960 - 9.007: 98.2573% ( 2) 00:14:01.165 9.055 - 9.102: 98.2648% ( 1) 00:14:01.165 9.102 - 9.150: 98.2724% ( 1) 00:14:01.165 9.150 - 9.197: 98.2799% ( 1) 00:14:01.165 9.244 - 9.292: 98.2874% ( 1) 00:14:01.165 9.387 - 9.434: 98.2950% ( 1) 00:14:01.165 9.481 - 9.529: 98.3025% ( 1) 00:14:01.165 9.624 - 9.671: 98.3101% ( 1) 00:14:01.165 9.766 - 9.813: 98.3176% ( 1) 00:14:01.165 9.908 - 9.956: 98.3252% ( 1) 00:14:01.165 10.193 - 10.240: 98.3402% ( 2) 00:14:01.165 10.335 - 10.382: 98.3553% ( 2) 00:14:01.166 10.477 - 10.524: 98.3629% ( 1) 00:14:01.166 10.524 - 10.572: 98.3780% ( 2) 00:14:01.166 10.667 - 10.714: 98.3855% ( 1) 00:14:01.166 10.809 - 10.856: 98.3931% ( 1) 00:14:01.166 10.951 - 10.999: 98.4081% ( 2) 00:14:01.166 11.141 - 11.188: 98.4157% ( 1) 00:14:01.166 11.283 - 11.330: 98.4232% ( 1) 00:14:01.166 11.425 - 
11.473: 98.4383% ( 2) 00:14:01.166 11.520 - 11.567: 98.4459% ( 1) 00:14:01.166 11.567 - 11.615: 98.4610% ( 2) 00:14:01.166 11.615 - 11.662: 98.4685% ( 1) 00:14:01.166 11.710 - 11.757: 98.4760% ( 1) 00:14:01.166 11.757 - 11.804: 98.4836% ( 1) 00:14:01.166 11.899 - 11.947: 98.4911% ( 1) 00:14:01.166 12.231 - 12.326: 98.5062% ( 2) 00:14:01.166 12.421 - 12.516: 98.5213% ( 2) 00:14:01.166 12.516 - 12.610: 98.5289% ( 1) 00:14:01.166 12.705 - 12.800: 98.5439% ( 2) 00:14:01.166 12.895 - 12.990: 98.5590% ( 2) 00:14:01.166 13.084 - 13.179: 98.5741% ( 2) 00:14:01.166 13.179 - 13.274: 98.5817% ( 1) 00:14:01.166 13.274 - 13.369: 98.6043% ( 3) 00:14:01.166 13.559 - 13.653: 98.6118% ( 1) 00:14:01.166 13.653 - 13.748: 98.6269% ( 2) 00:14:01.166 13.748 - 13.843: 98.6345% ( 1) 00:14:01.166 13.843 - 13.938: 98.6420% ( 1) 00:14:01.166 13.938 - 14.033: 98.6496% ( 1) 00:14:01.166 14.222 - 14.317: 98.6571% ( 1) 00:14:01.166 14.317 - 14.412: 98.6647% ( 1) 00:14:01.166 14.507 - 14.601: 98.7099% ( 6) 00:14:01.166 14.601 - 14.696: 98.7476% ( 5) 00:14:01.166 14.981 - 15.076: 98.7627% ( 2) 00:14:01.166 15.644 - 15.739: 98.7703% ( 1) 00:14:01.166 17.067 - 17.161: 98.7778% ( 1) 00:14:01.166 17.256 - 17.351: 98.7854% ( 1) 00:14:01.166 17.351 - 17.446: 98.8080% ( 3) 00:14:01.166 17.446 - 17.541: 98.8382% ( 4) 00:14:01.166 17.541 - 17.636: 98.8457% ( 1) 00:14:01.166 17.636 - 17.730: 98.8910% ( 6) 00:14:01.166 17.730 - 17.825: 98.9212% ( 4) 00:14:01.166 17.825 - 17.920: 98.9891% ( 9) 00:14:01.166 17.920 - 18.015: 99.0570% ( 9) 00:14:01.166 18.015 - 18.110: 99.1022% ( 6) 00:14:01.166 18.110 - 18.204: 99.1701% ( 9) 00:14:01.166 18.204 - 18.299: 99.2757% ( 14) 00:14:01.166 18.299 - 18.394: 99.3361% ( 8) 00:14:01.166 18.394 - 18.489: 99.4342% ( 13) 00:14:01.166 18.489 - 18.584: 99.4719% ( 5) 00:14:01.166 18.584 - 18.679: 99.5323% ( 8) 00:14:01.166 18.679 - 18.773: 99.6002% ( 9) 00:14:01.166 18.773 - 18.868: 99.6303% ( 4) 00:14:01.166 18.868 - 18.963: 99.6454% ( 2) 00:14:01.166 18.963 - 19.058: 99.6530% ( 1) 00:14:01.166 19.058 - 19.153: 99.6605% ( 1) 00:14:01.166 19.153 - 19.247: 99.6831% ( 3) 00:14:01.166 19.342 - 19.437: 99.7133% ( 4) 00:14:01.166 19.437 - 19.532: 99.7209% ( 1) 00:14:01.166 19.532 - 19.627: 99.7510% ( 4) 00:14:01.166 19.721 - 19.816: 99.7586% ( 1) 00:14:01.166 19.816 - 19.911: 99.7963% ( 5) 00:14:01.166 19.911 - 20.006: 99.8038% ( 1) 00:14:01.166 20.101 - 20.196: 99.8114% ( 1) 00:14:01.166 20.196 - 20.290: 99.8189% ( 1) 00:14:01.166 22.187 - 22.281: 99.8265% ( 1) 00:14:01.166 22.281 - 22.376: 99.8340% ( 1) 00:14:01.166 22.471 - 22.566: 99.8491% ( 2) 00:14:01.166 22.850 - 22.945: 99.8567% ( 1) 00:14:01.166 22.945 - 23.040: 99.8642% ( 1) 00:14:01.166 23.040 - 23.135: 99.8717% ( 1) 00:14:01.166 23.799 - 23.893: 99.8793% ( 1) 00:14:01.166 28.634 - 28.824: 99.8868% ( 1) 00:14:01.166 28.824 - 29.013: 99.8944% ( 1) 00:14:01.166 31.099 - 31.289: 99.9019% ( 1) 00:14:01.166 3980.705 - 4004.978: 99.9623% ( 8) 00:14:01.166 4004.978 - 4029.250: 100.0000% ( 5) 00:14:01.166 00:14:01.166 Complete histogram 00:14:01.166 ================== 00:14:01.166 Range in us Cumulative Count 00:14:01.166 2.062 - 2.074: 0.0302% ( 4) 00:14:01.166 2.074 - 2.086: 20.6790% ( 2737) 00:14:01.166 2.086 - 2.098: 43.9306% ( 3082) 00:14:01.166 2.098 - 2.110: 46.1863% ( 299) 00:14:01.166 2.110 - 2.121: 54.3116% ( 1077) 00:14:01.166 2.121 - 2.133: 58.4081% ( 543) 00:14:01.166 2.133 - 2.145: 60.2339% ( 242) 00:14:01.166 2.145 - 2.157: 70.8110% ( 1402) 00:14:01.166 2.157 - 2.169: 75.5790% ( 632) 00:14:01.166 2.169 - 2.181: 76.7937% ( 161) 00:14:01.166 
2.181 - 2.193: 80.1660% ( 447) 00:14:01.166 2.193 - 2.204: 81.7578% ( 211) 00:14:01.166 2.204 - 2.216: 82.5123% ( 100) 00:14:01.166 2.216 - 2.228: 86.3071% ( 503) 00:14:01.166 2.228 - 2.240: 89.3776% ( 407) 00:14:01.166 2.240 - 2.252: 91.0902% ( 227) 00:14:01.166 2.252 - 2.264: 92.6669% ( 209) 00:14:01.166 2.264 - 2.276: 93.4892% ( 109) 00:14:01.166 2.276 - 2.287: 93.8061% ( 42) 00:14:01.166 2.287 - 2.299: 94.0400% ( 31) 00:14:01.166 2.299 - 2.311: 94.5002% ( 61) 00:14:01.166 2.311 - 2.323: 95.1943% ( 92) 00:14:01.166 2.323 - 2.335: 95.3602% ( 22) 00:14:01.166 2.335 - 2.347: 95.4508% ( 12) 00:14:01.166 2.347 - 2.359: 95.6997% ( 33) 00:14:01.166 2.359 - 2.370: 95.9562% ( 34) 00:14:01.166 2.370 - 2.382: 96.2052% ( 33) 00:14:01.166 2.382 - 2.394: 96.6579% ( 60) 00:14:01.166 2.394 - 2.406: 97.2237% ( 75) 00:14:01.166 2.406 - 2.418: 97.3897% ( 22) 00:14:01.166 2.418 - 2.430: 97.5104% ( 16) 00:14:01.166 2.430 - 2.441: 97.6613% ( 20) 00:14:01.166 2.441 - 2.453: 97.7744% ( 15) 00:14:01.166 2.453 - 2.465: 97.9178% ( 19) 00:14:01.166 2.465 - 2.477: 98.0083% ( 12) 00:14:01.166 2.477 - 2.489: 98.0913% ( 11) 00:14:01.166 2.489 - 2.501: 98.1441% ( 7) 00:14:01.166 2.501 - 2.513: 98.2120% ( 9) 00:14:01.166 2.513 - 2.524: 98.2497% ( 5) 00:14:01.166 2.524 - 2.536: 98.2799% ( 4) 00:14:01.166 2.536 - 2.548: 98.3101% ( 4) 00:14:01.166 2.548 - 2.560: 98.3327% ( 3) 00:14:01.166 2.560 - 2.572: 98.3553% ( 3) 00:14:01.166 2.572 - 2.584: 98.3780% ( 3) 00:14:01.166 [2024-07-15 09:47:17.547624] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:01.166 2.584 - 2.596: 98.4006% ( 3) 00:14:01.166 2.596 - 2.607: 98.4157% ( 2) 00:14:01.166 2.631 - 2.643: 98.4232% ( 1) 00:14:01.166 2.667 - 2.679: 98.4308% ( 1) 00:14:01.166 2.726 - 2.738: 98.4383% ( 1) 00:14:01.166 2.738 - 2.750: 98.4459% ( 1) 00:14:01.166 2.750 - 2.761: 98.4534% ( 1) 00:14:01.166 2.844 - 2.856: 98.4610% ( 1) 00:14:01.166 2.892 - 2.904: 98.4685% ( 1) 00:14:01.166 3.295 - 3.319: 98.4760% ( 1) 00:14:01.166 3.390 - 3.413: 98.4911% ( 2) 00:14:01.166 3.437 - 3.461: 98.5062% ( 2) 00:14:01.166 3.484 - 3.508: 98.5138% ( 1) 00:14:01.166 3.532 - 3.556: 98.5213% ( 1) 00:14:01.166 3.556 - 3.579: 98.5289% ( 1) 00:14:01.166 3.579 - 3.603: 98.5364% ( 1) 00:14:01.166 3.603 - 3.627: 98.5515% ( 2) 00:14:01.166 3.650 - 3.674: 98.5590% ( 1) 00:14:01.166 3.674 - 3.698: 98.5666% ( 1) 00:14:01.166 3.721 - 3.745: 98.5817% ( 2) 00:14:01.166 3.745 - 3.769: 98.5892% ( 1) 00:14:01.166 3.816 - 3.840: 98.5968% ( 1) 00:14:01.166 4.006 - 4.030: 98.6118% ( 2) 00:14:01.166 4.101 - 4.124: 98.6194% ( 1) 00:14:01.166 4.836 - 4.859: 98.6269% ( 1) 00:14:01.166 5.025 - 5.049: 98.6345% ( 1) 00:14:01.166 5.096 - 5.120: 98.6420% ( 1) 00:14:01.166 5.191 - 5.215: 98.6496% ( 1) 00:14:01.166 5.713 - 5.736: 98.6571% ( 1) 00:14:01.166 5.831 - 5.855: 98.6647% ( 1) 00:14:01.166 5.997 - 6.021: 98.6722% ( 1) 00:14:01.166 6.021 - 6.044: 98.6797% ( 1) 00:14:01.166 6.068 - 6.116: 98.6873% ( 1) 00:14:01.166 6.258 - 6.305: 98.7024% ( 2) 00:14:01.166 6.400 - 6.447: 98.7175% ( 2) 00:14:01.166 6.542 - 6.590: 98.7250% ( 1) 00:14:01.166 6.590 - 6.637: 98.7326% ( 1) 00:14:01.166 6.637 - 6.684: 98.7401% ( 1) 00:14:01.166 6.874 - 6.921: 98.7476% ( 1) 00:14:01.166 7.822 - 7.870: 98.7552% ( 1) 00:14:01.166 11.283 - 11.330: 98.7627% ( 1) 00:14:01.166 11.852 - 11.899: 98.7703% ( 1) 00:14:01.166 11.994 - 12.041: 98.7778% ( 1) 00:14:01.166 14.033 - 14.127: 98.7854% ( 1) 00:14:01.166 15.455 - 15.550: 98.8005% ( 2) 00:14:01.166 15.550 - 15.644: 98.8080% ( 1) 
15.644 - 15.739: 98.8457% ( 5) 00:14:01.166 15.739 - 15.834: 98.8533% ( 1) 00:14:01.166 15.834 - 15.929: 98.8759% ( 3) 00:14:01.166 15.929 - 16.024: 98.8910% ( 2) 00:14:01.166 16.024 - 16.119: 98.9061% ( 2) 00:14:01.166 16.119 - 16.213: 98.9513% ( 6) 00:14:01.166 16.213 - 16.308: 99.0117% ( 8) 00:14:01.167 16.308 - 16.403: 99.0268% ( 2) 00:14:01.167 16.403 - 16.498: 99.0494% ( 3) 00:14:01.167 16.498 - 16.593: 99.1098% ( 8) 00:14:01.167 16.593 - 16.687: 99.1475% ( 5) 00:14:01.167 16.687 - 16.782: 99.1777% ( 4) 00:14:01.167 16.782 - 16.877: 99.2154% ( 5) 00:14:01.167 16.877 - 16.972: 99.2380% ( 3) 00:14:01.167 16.972 - 17.067: 99.2682% ( 4) 00:14:01.167 17.067 - 17.161: 99.2833% ( 2) 00:14:01.167 17.161 - 17.256: 99.2908% ( 1) 00:14:01.167 17.256 - 17.351: 99.3135% ( 3) 00:14:01.167 17.351 - 17.446: 99.3210% ( 1) 00:14:01.167 17.730 - 17.825: 99.3286% ( 1) 00:14:01.167 17.825 - 17.920: 99.3361% ( 1) 00:14:01.167 18.015 - 18.110: 99.3512% ( 2) 00:14:01.167 18.394 - 18.489: 99.3587% ( 1) 00:14:01.167 70.542 - 70.921: 99.3663% ( 1) 00:14:01.167 3980.705 - 4004.978: 99.8038% ( 58) 00:14:01.167 4004.978 - 4029.250: 100.0000% ( 26) 00:14:01.167 00:14:01.167 09:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:01.167 09:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:01.167 09:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:01.167 09:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:01.167 09:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:01.167 [ 00:14:01.167 { 00:14:01.167 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:01.167 "subtype": "Discovery", 00:14:01.167 "listen_addresses": [], 00:14:01.167 "allow_any_host": true, 00:14:01.167 "hosts": [] 00:14:01.167 }, 00:14:01.167 { 00:14:01.167 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:01.167 "subtype": "NVMe", 00:14:01.167 "listen_addresses": [ 00:14:01.167 { 00:14:01.167 "trtype": "VFIOUSER", 00:14:01.167 "adrfam": "IPv4", 00:14:01.167 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:01.167 "trsvcid": "0" 00:14:01.167 } 00:14:01.167 ], 00:14:01.167 "allow_any_host": true, 00:14:01.167 "hosts": [], 00:14:01.167 "serial_number": "SPDK1", 00:14:01.167 "model_number": "SPDK bdev Controller", 00:14:01.167 "max_namespaces": 32, 00:14:01.167 "min_cntlid": 1, 00:14:01.167 "max_cntlid": 65519, 00:14:01.167 "namespaces": [ 00:14:01.167 { 00:14:01.167 "nsid": 1, 00:14:01.167 "bdev_name": "Malloc1", 00:14:01.167 "name": "Malloc1", 00:14:01.167 "nguid": "7A707BA1D01A4116BA74823A4EA58C6B", 00:14:01.167 "uuid": "7a707ba1-d01a-4116-ba74-823a4ea58c6b" 00:14:01.167 } 00:14:01.167 ] 00:14:01.167 }, 00:14:01.167 { 00:14:01.167 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:01.167 "subtype": "NVMe", 00:14:01.167 "listen_addresses": [ 00:14:01.167 { 00:14:01.167 "trtype": "VFIOUSER", 00:14:01.167 "adrfam": "IPv4", 00:14:01.167 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:01.167 "trsvcid": "0" 00:14:01.167 } 00:14:01.167 ], 00:14:01.167 "allow_any_host": true, 00:14:01.167 "hosts": [], 00:14:01.167 "serial_number": "SPDK2", 00:14:01.167 "model_number": "SPDK bdev Controller", 00:14:01.167 "max_namespaces": 32, 00:14:01.167 "min_cntlid": 1, 00:14:01.167 
"max_cntlid": 65519, 00:14:01.167 "namespaces": [ 00:14:01.167 { 00:14:01.167 "nsid": 1, 00:14:01.167 "bdev_name": "Malloc2", 00:14:01.167 "name": "Malloc2", 00:14:01.167 "nguid": "D05F88AC6BC54DE9A9D870432FD38B6D", 00:14:01.167 "uuid": "d05f88ac-6bc5-4de9-a9d8-70432fd38b6d" 00:14:01.167 } 00:14:01.167 ] 00:14:01.167 } 00:14:01.167 ] 00:14:01.167 09:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:01.167 09:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1862693 00:14:01.167 09:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:01.167 09:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:01.167 09:47:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:01.167 09:47:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:01.167 09:47:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:01.167 09:47:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:01.167 09:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:01.167 09:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:01.167 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.425 [2024-07-15 09:47:18.048356] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:01.425 Malloc3 00:14:01.425 09:47:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:01.683 [2024-07-15 09:47:18.421227] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:01.683 09:47:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:01.683 Asynchronous Event Request test 00:14:01.683 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:01.683 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:01.683 Registering asynchronous event callbacks... 00:14:01.683 Starting namespace attribute notice tests for all controllers... 00:14:01.683 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:01.683 aer_cb - Changed Namespace 00:14:01.683 Cleaning up... 
00:14:01.942 [ 00:14:01.942 { 00:14:01.942 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:01.942 "subtype": "Discovery", 00:14:01.942 "listen_addresses": [], 00:14:01.942 "allow_any_host": true, 00:14:01.942 "hosts": [] 00:14:01.942 }, 00:14:01.942 { 00:14:01.942 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:01.942 "subtype": "NVMe", 00:14:01.942 "listen_addresses": [ 00:14:01.942 { 00:14:01.942 "trtype": "VFIOUSER", 00:14:01.942 "adrfam": "IPv4", 00:14:01.942 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:01.942 "trsvcid": "0" 00:14:01.942 } 00:14:01.942 ], 00:14:01.942 "allow_any_host": true, 00:14:01.942 "hosts": [], 00:14:01.942 "serial_number": "SPDK1", 00:14:01.942 "model_number": "SPDK bdev Controller", 00:14:01.942 "max_namespaces": 32, 00:14:01.942 "min_cntlid": 1, 00:14:01.942 "max_cntlid": 65519, 00:14:01.942 "namespaces": [ 00:14:01.942 { 00:14:01.942 "nsid": 1, 00:14:01.942 "bdev_name": "Malloc1", 00:14:01.942 "name": "Malloc1", 00:14:01.942 "nguid": "7A707BA1D01A4116BA74823A4EA58C6B", 00:14:01.942 "uuid": "7a707ba1-d01a-4116-ba74-823a4ea58c6b" 00:14:01.942 }, 00:14:01.942 { 00:14:01.942 "nsid": 2, 00:14:01.942 "bdev_name": "Malloc3", 00:14:01.942 "name": "Malloc3", 00:14:01.942 "nguid": "5EF1449F2D34470384BFAA8871AF4F31", 00:14:01.942 "uuid": "5ef1449f-2d34-4703-84bf-aa8871af4f31" 00:14:01.942 } 00:14:01.942 ] 00:14:01.942 }, 00:14:01.942 { 00:14:01.942 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:01.942 "subtype": "NVMe", 00:14:01.942 "listen_addresses": [ 00:14:01.942 { 00:14:01.942 "trtype": "VFIOUSER", 00:14:01.942 "adrfam": "IPv4", 00:14:01.942 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:01.942 "trsvcid": "0" 00:14:01.942 } 00:14:01.942 ], 00:14:01.942 "allow_any_host": true, 00:14:01.942 "hosts": [], 00:14:01.942 "serial_number": "SPDK2", 00:14:01.942 "model_number": "SPDK bdev Controller", 00:14:01.942 "max_namespaces": 32, 00:14:01.942 "min_cntlid": 1, 00:14:01.942 "max_cntlid": 65519, 00:14:01.942 "namespaces": [ 00:14:01.942 { 00:14:01.942 "nsid": 1, 00:14:01.942 "bdev_name": "Malloc2", 00:14:01.942 "name": "Malloc2", 00:14:01.942 "nguid": "D05F88AC6BC54DE9A9D870432FD38B6D", 00:14:01.942 "uuid": "d05f88ac-6bc5-4de9-a9d8-70432fd38b6d" 00:14:01.942 } 00:14:01.942 ] 00:14:01.942 } 00:14:01.942 ] 00:14:01.942 09:47:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1862693 00:14:01.942 09:47:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:01.942 09:47:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:01.942 09:47:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:01.942 09:47:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:01.942 [2024-07-15 09:47:18.686765] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:14:01.942 [2024-07-15 09:47:18.686804] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1862829 ] 00:14:01.942 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.942 [2024-07-15 09:47:18.703368] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:01.942 [2024-07-15 09:47:18.720837] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:02.202 [2024-07-15 09:47:18.727188] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:02.202 [2024-07-15 09:47:18.727237] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fef16893000 00:14:02.202 [2024-07-15 09:47:18.728173] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:02.202 [2024-07-15 09:47:18.729199] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:02.202 [2024-07-15 09:47:18.730178] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:02.202 [2024-07-15 09:47:18.731180] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:02.202 [2024-07-15 09:47:18.732204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:02.202 [2024-07-15 09:47:18.733213] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:02.202 [2024-07-15 09:47:18.734202] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:02.202 [2024-07-15 09:47:18.735209] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:02.202 [2024-07-15 09:47:18.736220] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:02.202 [2024-07-15 09:47:18.736241] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fef15655000 00:14:02.202 [2024-07-15 09:47:18.737354] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:02.202 [2024-07-15 09:47:18.751536] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:02.202 [2024-07-15 09:47:18.751565] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:02.202 [2024-07-15 09:47:18.756666] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:02.202 [2024-07-15 09:47:18.756714] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: 
max_completions_cap = 64 num_trackers = 192 00:14:02.202 [2024-07-15 09:47:18.756797] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:02.202 [2024-07-15 09:47:18.756818] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:02.202 [2024-07-15 09:47:18.756827] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:02.202 [2024-07-15 09:47:18.757671] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:02.202 [2024-07-15 09:47:18.757690] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:02.202 [2024-07-15 09:47:18.757703] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:02.202 [2024-07-15 09:47:18.758682] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:02.203 [2024-07-15 09:47:18.758701] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:02.203 [2024-07-15 09:47:18.758713] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:02.203 [2024-07-15 09:47:18.759693] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:02.203 [2024-07-15 09:47:18.759712] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:02.203 [2024-07-15 09:47:18.760704] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:02.203 [2024-07-15 09:47:18.760723] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:02.203 [2024-07-15 09:47:18.760732] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:02.203 [2024-07-15 09:47:18.760743] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:02.203 [2024-07-15 09:47:18.760852] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:02.203 [2024-07-15 09:47:18.760866] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:02.203 [2024-07-15 09:47:18.760875] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:02.203 [2024-07-15 09:47:18.761711] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:02.203 [2024-07-15 09:47:18.762712] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:02.203 [2024-07-15 09:47:18.763722] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:02.203 [2024-07-15 09:47:18.764716] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:02.203 [2024-07-15 09:47:18.764793] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:02.203 [2024-07-15 09:47:18.765732] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:02.203 [2024-07-15 09:47:18.765751] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:02.203 [2024-07-15 09:47:18.765760] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:02.203 [2024-07-15 09:47:18.765783] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:02.203 [2024-07-15 09:47:18.765796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:02.203 [2024-07-15 09:47:18.765813] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:02.203 [2024-07-15 09:47:18.765822] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:02.203 [2024-07-15 09:47:18.765838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:02.203 [2024-07-15 09:47:18.773893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:02.203 [2024-07-15 09:47:18.773914] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:02.203 [2024-07-15 09:47:18.773928] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:02.203 [2024-07-15 09:47:18.773937] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:02.203 [2024-07-15 09:47:18.773944] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:02.203 [2024-07-15 09:47:18.773952] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:02.203 [2024-07-15 09:47:18.773960] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:02.203 [2024-07-15 09:47:18.773968] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:02.203 [2024-07-15 09:47:18.773981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 
00:14:02.203 [2024-07-15 09:47:18.773996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:02.203 [2024-07-15 09:47:18.781888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:02.203 [2024-07-15 09:47:18.781934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.203 [2024-07-15 09:47:18.781950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.203 [2024-07-15 09:47:18.781963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.203 [2024-07-15 09:47:18.781975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.203 [2024-07-15 09:47:18.781984] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:02.203 [2024-07-15 09:47:18.782000] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:02.203 [2024-07-15 09:47:18.782015] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:02.203 [2024-07-15 09:47:18.789901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:02.203 [2024-07-15 09:47:18.789918] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:02.203 [2024-07-15 09:47:18.789928] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:02.203 [2024-07-15 09:47:18.789939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:02.203 [2024-07-15 09:47:18.789949] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:02.203 [2024-07-15 09:47:18.789963] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:02.203 [2024-07-15 09:47:18.797887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:02.203 [2024-07-15 09:47:18.797955] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:02.203 [2024-07-15 09:47:18.797969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:02.203 [2024-07-15 09:47:18.797982] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:02.203 [2024-07-15 09:47:18.797990] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:14:02.203 [2024-07-15 09:47:18.797999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:02.203 [2024-07-15 09:47:18.805901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:02.203 [2024-07-15 09:47:18.805929] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:02.203 [2024-07-15 09:47:18.805945] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:02.203 [2024-07-15 09:47:18.805959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:02.203 [2024-07-15 09:47:18.805971] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:02.203 [2024-07-15 09:47:18.805980] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:02.203 [2024-07-15 09:47:18.805993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:02.203 [2024-07-15 09:47:18.813900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:02.203 [2024-07-15 09:47:18.813926] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:02.203 [2024-07-15 09:47:18.813942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:02.203 [2024-07-15 09:47:18.813955] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:02.203 [2024-07-15 09:47:18.813964] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:02.204 [2024-07-15 09:47:18.813973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:02.204 [2024-07-15 09:47:18.821899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:02.204 [2024-07-15 09:47:18.821920] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:02.204 [2024-07-15 09:47:18.821933] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:02.204 [2024-07-15 09:47:18.821946] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:02.204 [2024-07-15 09:47:18.821957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:02.204 [2024-07-15 09:47:18.821965] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 
00:14:02.204 [2024-07-15 09:47:18.821973] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:02.204 [2024-07-15 09:47:18.821981] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:02.204 [2024-07-15 09:47:18.821989] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:02.204 [2024-07-15 09:47:18.821997] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:02.204 [2024-07-15 09:47:18.822020] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:02.204 [2024-07-15 09:47:18.829899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:02.204 [2024-07-15 09:47:18.829926] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:02.204 [2024-07-15 09:47:18.837889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:02.204 [2024-07-15 09:47:18.837914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:02.204 [2024-07-15 09:47:18.845901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:02.204 [2024-07-15 09:47:18.845927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:02.204 [2024-07-15 09:47:18.853902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:02.204 [2024-07-15 09:47:18.853938] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:02.204 [2024-07-15 09:47:18.853950] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:02.204 [2024-07-15 09:47:18.853956] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:02.204 [2024-07-15 09:47:18.853962] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:02.204 [2024-07-15 09:47:18.853971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:02.204 [2024-07-15 09:47:18.853983] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:02.204 [2024-07-15 09:47:18.853991] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:02.204 [2024-07-15 09:47:18.854000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:02.204 [2024-07-15 09:47:18.854010] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:02.204 [2024-07-15 09:47:18.854018] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 
00:14:02.204 [2024-07-15 09:47:18.854027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:02.204 [2024-07-15 09:47:18.854038] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:02.204 [2024-07-15 09:47:18.854046] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:02.204 [2024-07-15 09:47:18.854055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:02.204 [2024-07-15 09:47:18.861902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:02.204 [2024-07-15 09:47:18.861928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:02.204 [2024-07-15 09:47:18.861945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:02.204 [2024-07-15 09:47:18.861957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:02.204 ===================================================== 00:14:02.204 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:02.204 ===================================================== 00:14:02.204 Controller Capabilities/Features 00:14:02.204 ================================ 00:14:02.204 Vendor ID: 4e58 00:14:02.204 Subsystem Vendor ID: 4e58 00:14:02.204 Serial Number: SPDK2 00:14:02.204 Model Number: SPDK bdev Controller 00:14:02.204 Firmware Version: 24.09 00:14:02.204 Recommended Arb Burst: 6 00:14:02.204 IEEE OUI Identifier: 8d 6b 50 00:14:02.204 Multi-path I/O 00:14:02.204 May have multiple subsystem ports: Yes 00:14:02.204 May have multiple controllers: Yes 00:14:02.204 Associated with SR-IOV VF: No 00:14:02.204 Max Data Transfer Size: 131072 00:14:02.204 Max Number of Namespaces: 32 00:14:02.204 Max Number of I/O Queues: 127 00:14:02.204 NVMe Specification Version (VS): 1.3 00:14:02.204 NVMe Specification Version (Identify): 1.3 00:14:02.204 Maximum Queue Entries: 256 00:14:02.204 Contiguous Queues Required: Yes 00:14:02.204 Arbitration Mechanisms Supported 00:14:02.204 Weighted Round Robin: Not Supported 00:14:02.204 Vendor Specific: Not Supported 00:14:02.204 Reset Timeout: 15000 ms 00:14:02.204 Doorbell Stride: 4 bytes 00:14:02.204 NVM Subsystem Reset: Not Supported 00:14:02.204 Command Sets Supported 00:14:02.204 NVM Command Set: Supported 00:14:02.204 Boot Partition: Not Supported 00:14:02.204 Memory Page Size Minimum: 4096 bytes 00:14:02.204 Memory Page Size Maximum: 4096 bytes 00:14:02.204 Persistent Memory Region: Not Supported 00:14:02.204 Optional Asynchronous Events Supported 00:14:02.204 Namespace Attribute Notices: Supported 00:14:02.204 Firmware Activation Notices: Not Supported 00:14:02.204 ANA Change Notices: Not Supported 00:14:02.204 PLE Aggregate Log Change Notices: Not Supported 00:14:02.204 LBA Status Info Alert Notices: Not Supported 00:14:02.204 EGE Aggregate Log Change Notices: Not Supported 00:14:02.204 Normal NVM Subsystem Shutdown event: Not Supported 00:14:02.204 Zone Descriptor Change Notices: Not Supported 00:14:02.204 Discovery Log Change Notices: Not Supported 00:14:02.204 Controller 
Attributes 00:14:02.204 128-bit Host Identifier: Supported 00:14:02.204 Non-Operational Permissive Mode: Not Supported 00:14:02.204 NVM Sets: Not Supported 00:14:02.204 Read Recovery Levels: Not Supported 00:14:02.204 Endurance Groups: Not Supported 00:14:02.204 Predictable Latency Mode: Not Supported 00:14:02.204 Traffic Based Keep ALive: Not Supported 00:14:02.204 Namespace Granularity: Not Supported 00:14:02.204 SQ Associations: Not Supported 00:14:02.204 UUID List: Not Supported 00:14:02.204 Multi-Domain Subsystem: Not Supported 00:14:02.204 Fixed Capacity Management: Not Supported 00:14:02.205 Variable Capacity Management: Not Supported 00:14:02.205 Delete Endurance Group: Not Supported 00:14:02.205 Delete NVM Set: Not Supported 00:14:02.205 Extended LBA Formats Supported: Not Supported 00:14:02.205 Flexible Data Placement Supported: Not Supported 00:14:02.205 00:14:02.205 Controller Memory Buffer Support 00:14:02.205 ================================ 00:14:02.205 Supported: No 00:14:02.205 00:14:02.205 Persistent Memory Region Support 00:14:02.205 ================================ 00:14:02.205 Supported: No 00:14:02.205 00:14:02.205 Admin Command Set Attributes 00:14:02.205 ============================ 00:14:02.205 Security Send/Receive: Not Supported 00:14:02.205 Format NVM: Not Supported 00:14:02.205 Firmware Activate/Download: Not Supported 00:14:02.205 Namespace Management: Not Supported 00:14:02.205 Device Self-Test: Not Supported 00:14:02.205 Directives: Not Supported 00:14:02.205 NVMe-MI: Not Supported 00:14:02.205 Virtualization Management: Not Supported 00:14:02.205 Doorbell Buffer Config: Not Supported 00:14:02.205 Get LBA Status Capability: Not Supported 00:14:02.205 Command & Feature Lockdown Capability: Not Supported 00:14:02.205 Abort Command Limit: 4 00:14:02.205 Async Event Request Limit: 4 00:14:02.205 Number of Firmware Slots: N/A 00:14:02.205 Firmware Slot 1 Read-Only: N/A 00:14:02.205 Firmware Activation Without Reset: N/A 00:14:02.205 Multiple Update Detection Support: N/A 00:14:02.205 Firmware Update Granularity: No Information Provided 00:14:02.205 Per-Namespace SMART Log: No 00:14:02.205 Asymmetric Namespace Access Log Page: Not Supported 00:14:02.205 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:02.205 Command Effects Log Page: Supported 00:14:02.205 Get Log Page Extended Data: Supported 00:14:02.205 Telemetry Log Pages: Not Supported 00:14:02.205 Persistent Event Log Pages: Not Supported 00:14:02.205 Supported Log Pages Log Page: May Support 00:14:02.205 Commands Supported & Effects Log Page: Not Supported 00:14:02.205 Feature Identifiers & Effects Log Page:May Support 00:14:02.205 NVMe-MI Commands & Effects Log Page: May Support 00:14:02.205 Data Area 4 for Telemetry Log: Not Supported 00:14:02.205 Error Log Page Entries Supported: 128 00:14:02.205 Keep Alive: Supported 00:14:02.205 Keep Alive Granularity: 10000 ms 00:14:02.205 00:14:02.205 NVM Command Set Attributes 00:14:02.205 ========================== 00:14:02.205 Submission Queue Entry Size 00:14:02.205 Max: 64 00:14:02.205 Min: 64 00:14:02.205 Completion Queue Entry Size 00:14:02.205 Max: 16 00:14:02.205 Min: 16 00:14:02.205 Number of Namespaces: 32 00:14:02.205 Compare Command: Supported 00:14:02.205 Write Uncorrectable Command: Not Supported 00:14:02.205 Dataset Management Command: Supported 00:14:02.205 Write Zeroes Command: Supported 00:14:02.205 Set Features Save Field: Not Supported 00:14:02.205 Reservations: Not Supported 00:14:02.205 Timestamp: Not Supported 00:14:02.205 Copy: Supported 
00:14:02.205 Volatile Write Cache: Present 00:14:02.205 Atomic Write Unit (Normal): 1 00:14:02.205 Atomic Write Unit (PFail): 1 00:14:02.205 Atomic Compare & Write Unit: 1 00:14:02.205 Fused Compare & Write: Supported 00:14:02.205 Scatter-Gather List 00:14:02.205 SGL Command Set: Supported (Dword aligned) 00:14:02.205 SGL Keyed: Not Supported 00:14:02.205 SGL Bit Bucket Descriptor: Not Supported 00:14:02.205 SGL Metadata Pointer: Not Supported 00:14:02.205 Oversized SGL: Not Supported 00:14:02.205 SGL Metadata Address: Not Supported 00:14:02.205 SGL Offset: Not Supported 00:14:02.205 Transport SGL Data Block: Not Supported 00:14:02.205 Replay Protected Memory Block: Not Supported 00:14:02.205 00:14:02.205 Firmware Slot Information 00:14:02.205 ========================= 00:14:02.205 Active slot: 1 00:14:02.205 Slot 1 Firmware Revision: 24.09 00:14:02.205 00:14:02.205 00:14:02.205 Commands Supported and Effects 00:14:02.205 ============================== 00:14:02.205 Admin Commands 00:14:02.205 -------------- 00:14:02.205 Get Log Page (02h): Supported 00:14:02.205 Identify (06h): Supported 00:14:02.205 Abort (08h): Supported 00:14:02.205 Set Features (09h): Supported 00:14:02.205 Get Features (0Ah): Supported 00:14:02.205 Asynchronous Event Request (0Ch): Supported 00:14:02.205 Keep Alive (18h): Supported 00:14:02.205 I/O Commands 00:14:02.205 ------------ 00:14:02.205 Flush (00h): Supported LBA-Change 00:14:02.205 Write (01h): Supported LBA-Change 00:14:02.205 Read (02h): Supported 00:14:02.205 Compare (05h): Supported 00:14:02.205 Write Zeroes (08h): Supported LBA-Change 00:14:02.205 Dataset Management (09h): Supported LBA-Change 00:14:02.205 Copy (19h): Supported LBA-Change 00:14:02.205 00:14:02.205 Error Log 00:14:02.205 ========= 00:14:02.205 00:14:02.205 Arbitration 00:14:02.205 =========== 00:14:02.205 Arbitration Burst: 1 00:14:02.205 00:14:02.205 Power Management 00:14:02.205 ================ 00:14:02.205 Number of Power States: 1 00:14:02.205 Current Power State: Power State #0 00:14:02.205 Power State #0: 00:14:02.205 Max Power: 0.00 W 00:14:02.205 Non-Operational State: Operational 00:14:02.205 Entry Latency: Not Reported 00:14:02.205 Exit Latency: Not Reported 00:14:02.205 Relative Read Throughput: 0 00:14:02.205 Relative Read Latency: 0 00:14:02.205 Relative Write Throughput: 0 00:14:02.205 Relative Write Latency: 0 00:14:02.205 Idle Power: Not Reported 00:14:02.205 Active Power: Not Reported 00:14:02.205 Non-Operational Permissive Mode: Not Supported 00:14:02.205 00:14:02.205 Health Information 00:14:02.205 ================== 00:14:02.205 Critical Warnings: 00:14:02.205 Available Spare Space: OK 00:14:02.205 Temperature: OK 00:14:02.205 Device Reliability: OK 00:14:02.205 Read Only: No 00:14:02.205 Volatile Memory Backup: OK 00:14:02.205 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:02.205 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:02.205 Available Spare: 0% 00:14:02.205 Available Spare Threshold: 0% [2024-07-15 09:47:18.862076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:02.205 [2024-07-15 09:47:18.869904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:02.205 [2024-07-15 09:47:18.869967] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:02.205 [2024-07-15 09:47:18.869984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.205 [2024-07-15 09:47:18.869995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.206 [2024-07-15 09:47:18.870005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.206 [2024-07-15 09:47:18.870015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.206 [2024-07-15 09:47:18.870104] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:02.206 [2024-07-15 09:47:18.870124] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:02.206 [2024-07-15 09:47:18.871107] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:02.206 [2024-07-15 09:47:18.871196] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:02.206 [2024-07-15 09:47:18.871211] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:02.206 [2024-07-15 09:47:18.872118] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:02.206 [2024-07-15 09:47:18.872141] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:02.206 [2024-07-15 09:47:18.872204] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:02.206 [2024-07-15 09:47:18.873414] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:02.206 Life Percentage Used: 0% 00:14:02.206 Data Units Read: 0 00:14:02.206 Data Units Written: 0 00:14:02.206 Host Read Commands: 0 00:14:02.206 Host Write Commands: 0 00:14:02.206 Controller Busy Time: 0 minutes 00:14:02.206 Power Cycles: 0 00:14:02.206 Power On Hours: 0 hours 00:14:02.206 Unsafe Shutdowns: 0 00:14:02.206 Unrecoverable Media Errors: 0 00:14:02.206 Lifetime Error Log Entries: 0 00:14:02.206 Warning Temperature Time: 0 minutes 00:14:02.206 Critical Temperature Time: 0 minutes 00:14:02.206 00:14:02.206 Number of Queues 00:14:02.206 ================ 00:14:02.206 Number of I/O Submission Queues: 127 00:14:02.206 Number of I/O Completion Queues: 127 00:14:02.206 00:14:02.206 Active Namespaces 00:14:02.206 ================= 00:14:02.206 Namespace ID:1 00:14:02.206 Error Recovery Timeout: Unlimited 00:14:02.206 Command Set Identifier: NVM (00h) 00:14:02.206 Deallocate: Supported 00:14:02.206 Deallocated/Unwritten Error: Not Supported 00:14:02.206 Deallocated Read Value: Unknown 00:14:02.206 Deallocate in Write Zeroes: Not Supported 00:14:02.206 Deallocated Guard Field: 0xFFFF 00:14:02.206 Flush: Supported 00:14:02.206 Reservation: Supported 00:14:02.206 Namespace Sharing Capabilities: Multiple Controllers 00:14:02.206 Size (in LBAs): 131072 (0GiB) 00:14:02.206 Capacity (in LBAs): 131072 (0GiB) 00:14:02.206 Utilization (in LBAs): 131072 (0GiB) 00:14:02.206 NGUID: D05F88AC6BC54DE9A9D870432FD38B6D 00:14:02.206 
UUID: d05f88ac-6bc5-4de9-a9d8-70432fd38b6d 00:14:02.206 Thin Provisioning: Not Supported 00:14:02.206 Per-NS Atomic Units: Yes 00:14:02.206 Atomic Boundary Size (Normal): 0 00:14:02.206 Atomic Boundary Size (PFail): 0 00:14:02.206 Atomic Boundary Offset: 0 00:14:02.206 Maximum Single Source Range Length: 65535 00:14:02.206 Maximum Copy Length: 65535 00:14:02.206 Maximum Source Range Count: 1 00:14:02.206 NGUID/EUI64 Never Reused: No 00:14:02.206 Namespace Write Protected: No 00:14:02.206 Number of LBA Formats: 1 00:14:02.206 Current LBA Format: LBA Format #00 00:14:02.206 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:02.206 00:14:02.206 09:47:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:02.206 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.464 [2024-07-15 09:47:19.102645] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:07.739 Initializing NVMe Controllers 00:14:07.739 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:07.739 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:07.739 Initialization complete. Launching workers. 00:14:07.739 ======================================================== 00:14:07.739 Latency(us) 00:14:07.739 Device Information : IOPS MiB/s Average min max 00:14:07.739 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34455.98 134.59 3714.01 1177.66 7396.85 00:14:07.739 ======================================================== 00:14:07.739 Total : 34455.98 134.59 3714.01 1177.66 7396.85 00:14:07.739 00:14:07.739 [2024-07-15 09:47:24.207289] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:07.739 09:47:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:07.739 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.739 [2024-07-15 09:47:24.445870] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:13.022 Initializing NVMe Controllers 00:14:13.022 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:13.022 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:13.022 Initialization complete. Launching workers. 
00:14:13.022 ======================================================== 00:14:13.022 Latency(us) 00:14:13.022 Device Information : IOPS MiB/s Average min max 00:14:13.022 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31763.80 124.08 4029.48 1215.17 7503.92 00:14:13.022 ======================================================== 00:14:13.022 Total : 31763.80 124.08 4029.48 1215.17 7503.92 00:14:13.022 00:14:13.022 [2024-07-15 09:47:29.467342] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:13.022 09:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:13.022 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.022 [2024-07-15 09:47:29.676438] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:18.316 [2024-07-15 09:47:34.805010] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:18.316 Initializing NVMe Controllers 00:14:18.316 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:18.316 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:18.316 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:18.316 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:18.316 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:18.316 Initialization complete. Launching workers. 00:14:18.316 Starting thread on core 2 00:14:18.316 Starting thread on core 3 00:14:18.316 Starting thread on core 1 00:14:18.316 09:47:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:18.316 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.316 [2024-07-15 09:47:35.094194] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:21.605 [2024-07-15 09:47:38.143305] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:21.605 Initializing NVMe Controllers 00:14:21.605 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:21.605 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:21.605 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:21.605 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:21.605 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:21.605 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:21.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:21.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:21.605 Initialization complete. Launching workers. 
00:14:21.605 Starting thread on core 1 with urgent priority queue 00:14:21.605 Starting thread on core 2 with urgent priority queue 00:14:21.605 Starting thread on core 3 with urgent priority queue 00:14:21.605 Starting thread on core 0 with urgent priority queue 00:14:21.605 SPDK bdev Controller (SPDK2 ) core 0: 6045.33 IO/s 16.54 secs/100000 ios 00:14:21.605 SPDK bdev Controller (SPDK2 ) core 1: 6466.00 IO/s 15.47 secs/100000 ios 00:14:21.605 SPDK bdev Controller (SPDK2 ) core 2: 5794.33 IO/s 17.26 secs/100000 ios 00:14:21.605 SPDK bdev Controller (SPDK2 ) core 3: 5842.67 IO/s 17.12 secs/100000 ios 00:14:21.605 ======================================================== 00:14:21.605 00:14:21.606 09:47:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:21.606 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.865 [2024-07-15 09:47:38.433372] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:21.865 Initializing NVMe Controllers 00:14:21.865 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:21.865 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:21.865 Namespace ID: 1 size: 0GB 00:14:21.865 Initialization complete. 00:14:21.865 INFO: using host memory buffer for IO 00:14:21.865 Hello world! 00:14:21.865 [2024-07-15 09:47:38.446466] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:21.865 09:47:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:21.865 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.123 [2024-07-15 09:47:38.744192] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:23.059 Initializing NVMe Controllers 00:14:23.059 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:23.059 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:23.059 Initialization complete. Launching workers. 
00:14:23.059 submit (in ns) avg, min, max = 7988.6, 3556.7, 4019548.9 00:14:23.059 complete (in ns) avg, min, max = 26153.0, 2064.4, 4025883.3 00:14:23.059 00:14:23.059 Submit histogram 00:14:23.059 ================ 00:14:23.059 Range in us Cumulative Count 00:14:23.059 3.556 - 3.579: 0.8122% ( 106) 00:14:23.059 3.579 - 3.603: 2.0382% ( 160) 00:14:23.059 3.603 - 3.627: 5.4555% ( 446) 00:14:23.059 3.627 - 3.650: 11.7845% ( 826) 00:14:23.059 3.650 - 3.674: 20.7187% ( 1166) 00:14:23.059 3.674 - 3.698: 29.6223% ( 1162) 00:14:23.059 3.698 - 3.721: 37.9511% ( 1087) 00:14:23.059 3.721 - 3.745: 44.4640% ( 850) 00:14:23.059 3.745 - 3.769: 50.3256% ( 765) 00:14:23.059 3.769 - 3.793: 55.8731% ( 724) 00:14:23.059 3.793 - 3.816: 60.0414% ( 544) 00:14:23.059 3.816 - 3.840: 63.5354% ( 456) 00:14:23.059 3.840 - 3.864: 66.8608% ( 434) 00:14:23.059 3.864 - 3.887: 70.5693% ( 484) 00:14:23.059 3.887 - 3.911: 74.3008% ( 487) 00:14:23.059 3.911 - 3.935: 78.7373% ( 579) 00:14:23.059 3.935 - 3.959: 82.1240% ( 442) 00:14:23.059 3.959 - 3.982: 84.8594% ( 357) 00:14:23.059 3.982 - 4.006: 87.1427% ( 298) 00:14:23.059 4.006 - 4.030: 88.8591% ( 224) 00:14:23.059 4.030 - 4.053: 90.3762% ( 198) 00:14:23.059 4.053 - 4.077: 91.4566% ( 141) 00:14:23.059 4.077 - 4.101: 92.5370% ( 141) 00:14:23.059 4.101 - 4.124: 93.5714% ( 135) 00:14:23.059 4.124 - 4.148: 94.3069% ( 96) 00:14:23.059 4.148 - 4.172: 95.0349% ( 95) 00:14:23.059 4.172 - 4.196: 95.5176% ( 63) 00:14:23.059 4.196 - 4.219: 95.8854% ( 48) 00:14:23.059 4.219 - 4.243: 96.1382% ( 33) 00:14:23.059 4.243 - 4.267: 96.2991% ( 21) 00:14:23.059 4.267 - 4.290: 96.4754% ( 23) 00:14:23.059 4.290 - 4.314: 96.6516% ( 23) 00:14:23.059 4.314 - 4.338: 96.7512% ( 13) 00:14:23.059 4.338 - 4.361: 96.8968% ( 19) 00:14:23.060 4.361 - 4.385: 96.9811% ( 11) 00:14:23.060 4.385 - 4.409: 97.0194% ( 5) 00:14:23.060 4.409 - 4.433: 97.0577% ( 5) 00:14:23.060 4.433 - 4.456: 97.0654% ( 1) 00:14:23.060 4.456 - 4.480: 97.0883% ( 3) 00:14:23.060 4.480 - 4.504: 97.1343% ( 6) 00:14:23.060 4.504 - 4.527: 97.1420% ( 1) 00:14:23.060 4.527 - 4.551: 97.1573% ( 2) 00:14:23.060 4.551 - 4.575: 97.1880% ( 4) 00:14:23.060 4.575 - 4.599: 97.2109% ( 3) 00:14:23.060 4.599 - 4.622: 97.2416% ( 4) 00:14:23.060 4.622 - 4.646: 97.2646% ( 3) 00:14:23.060 4.646 - 4.670: 97.3182% ( 7) 00:14:23.060 4.670 - 4.693: 97.3335% ( 2) 00:14:23.060 4.693 - 4.717: 97.3718% ( 5) 00:14:23.060 4.717 - 4.741: 97.3795% ( 1) 00:14:23.060 4.741 - 4.764: 97.3948% ( 2) 00:14:23.060 4.764 - 4.788: 97.4102% ( 2) 00:14:23.060 4.788 - 4.812: 97.4331% ( 3) 00:14:23.060 4.812 - 4.836: 97.4791% ( 6) 00:14:23.060 4.836 - 4.859: 97.5328% ( 7) 00:14:23.060 4.859 - 4.883: 97.5711% ( 5) 00:14:23.060 4.883 - 4.907: 97.6017% ( 4) 00:14:23.060 4.907 - 4.930: 97.6554% ( 7) 00:14:23.060 4.930 - 4.954: 97.7013% ( 6) 00:14:23.060 4.954 - 4.978: 97.7550% ( 7) 00:14:23.060 4.978 - 5.001: 97.8009% ( 6) 00:14:23.060 5.001 - 5.025: 97.8392% ( 5) 00:14:23.060 5.025 - 5.049: 97.8622% ( 3) 00:14:23.060 5.049 - 5.073: 97.8776% ( 2) 00:14:23.060 5.073 - 5.096: 97.9159% ( 5) 00:14:23.060 5.096 - 5.120: 97.9465% ( 4) 00:14:23.060 5.120 - 5.144: 97.9772% ( 4) 00:14:23.060 5.144 - 5.167: 97.9925% ( 2) 00:14:23.060 5.167 - 5.191: 98.0385% ( 6) 00:14:23.060 5.191 - 5.215: 98.0461% ( 1) 00:14:23.060 5.215 - 5.239: 98.0844% ( 5) 00:14:23.060 5.239 - 5.262: 98.1151% ( 4) 00:14:23.060 5.262 - 5.286: 98.1227% ( 1) 00:14:23.060 5.310 - 5.333: 98.1304% ( 1) 00:14:23.060 5.333 - 5.357: 98.1611% ( 4) 00:14:23.060 5.404 - 5.428: 98.1840% ( 3) 00:14:23.060 5.452 - 5.476: 98.1917% ( 1) 
00:14:23.060 5.476 - 5.499: 98.1994% ( 1) 00:14:23.060 5.594 - 5.618: 98.2147% ( 2) 00:14:23.060 5.689 - 5.713: 98.2224% ( 1) 00:14:23.060 5.736 - 5.760: 98.2300% ( 1) 00:14:23.060 5.784 - 5.807: 98.2377% ( 1) 00:14:23.060 5.807 - 5.831: 98.2453% ( 1) 00:14:23.060 5.855 - 5.879: 98.2530% ( 1) 00:14:23.060 5.879 - 5.902: 98.2607% ( 1) 00:14:23.060 5.926 - 5.950: 98.2683% ( 1) 00:14:23.060 5.973 - 5.997: 98.2760% ( 1) 00:14:23.060 6.021 - 6.044: 98.2837% ( 1) 00:14:23.060 6.068 - 6.116: 98.2913% ( 1) 00:14:23.060 6.258 - 6.305: 98.2990% ( 1) 00:14:23.060 6.305 - 6.353: 98.3066% ( 1) 00:14:23.060 6.495 - 6.542: 98.3143% ( 1) 00:14:23.060 6.542 - 6.590: 98.3373% ( 3) 00:14:23.060 6.637 - 6.684: 98.3450% ( 1) 00:14:23.060 6.684 - 6.732: 98.3526% ( 1) 00:14:23.060 6.779 - 6.827: 98.3833% ( 4) 00:14:23.060 6.969 - 7.016: 98.3909% ( 1) 00:14:23.060 7.064 - 7.111: 98.3986% ( 1) 00:14:23.060 7.111 - 7.159: 98.4063% ( 1) 00:14:23.060 7.159 - 7.206: 98.4139% ( 1) 00:14:23.060 7.253 - 7.301: 98.4292% ( 2) 00:14:23.060 7.348 - 7.396: 98.4369% ( 1) 00:14:23.060 7.396 - 7.443: 98.4446% ( 1) 00:14:23.060 7.585 - 7.633: 98.4522% ( 1) 00:14:23.060 7.633 - 7.680: 98.4676% ( 2) 00:14:23.060 7.680 - 7.727: 98.4982% ( 4) 00:14:23.060 7.775 - 7.822: 98.5212% ( 3) 00:14:23.060 7.870 - 7.917: 98.5365% ( 2) 00:14:23.060 7.917 - 7.964: 98.5518% ( 2) 00:14:23.060 7.964 - 8.012: 98.5595% ( 1) 00:14:23.060 8.107 - 8.154: 98.5748% ( 2) 00:14:23.060 8.249 - 8.296: 98.5901% ( 2) 00:14:23.060 8.296 - 8.344: 98.5978% ( 1) 00:14:23.060 8.391 - 8.439: 98.6131% ( 2) 00:14:23.060 8.439 - 8.486: 98.6285% ( 2) 00:14:23.060 8.723 - 8.770: 98.6361% ( 1) 00:14:23.060 8.770 - 8.818: 98.6438% ( 1) 00:14:23.060 8.818 - 8.865: 98.6514% ( 1) 00:14:23.060 9.197 - 9.244: 98.6591% ( 1) 00:14:23.060 9.766 - 9.813: 98.6668% ( 1) 00:14:23.060 9.956 - 10.003: 98.6744% ( 1) 00:14:23.060 10.050 - 10.098: 98.6821% ( 1) 00:14:23.060 10.572 - 10.619: 98.6974% ( 2) 00:14:23.060 10.809 - 10.856: 98.7051% ( 1) 00:14:23.060 10.904 - 10.951: 98.7204% ( 2) 00:14:23.060 11.236 - 11.283: 98.7281% ( 1) 00:14:23.060 11.378 - 11.425: 98.7357% ( 1) 00:14:23.060 11.994 - 12.041: 98.7434% ( 1) 00:14:23.060 12.326 - 12.421: 98.7511% ( 1) 00:14:23.060 12.421 - 12.516: 98.7587% ( 1) 00:14:23.060 12.895 - 12.990: 98.7817% ( 3) 00:14:23.060 12.990 - 13.084: 98.7894% ( 1) 00:14:23.060 13.559 - 13.653: 98.8047% ( 2) 00:14:23.060 13.748 - 13.843: 98.8124% ( 1) 00:14:23.060 14.033 - 14.127: 98.8200% ( 1) 00:14:23.060 14.507 - 14.601: 98.8277% ( 1) 00:14:23.060 14.696 - 14.791: 98.8353% ( 1) 00:14:23.060 14.791 - 14.886: 98.8430% ( 1) 00:14:23.060 16.972 - 17.067: 98.8583% ( 2) 00:14:23.060 17.067 - 17.161: 98.8660% ( 1) 00:14:23.060 17.161 - 17.256: 98.8736% ( 1) 00:14:23.060 17.256 - 17.351: 98.8813% ( 1) 00:14:23.060 17.351 - 17.446: 98.8890% ( 1) 00:14:23.060 17.446 - 17.541: 98.9426% ( 7) 00:14:23.060 17.541 - 17.636: 99.0039% ( 8) 00:14:23.060 17.636 - 17.730: 99.0116% ( 1) 00:14:23.060 17.730 - 17.825: 99.0805% ( 9) 00:14:23.060 17.825 - 17.920: 99.1418% ( 8) 00:14:23.060 17.920 - 18.015: 99.2108% ( 9) 00:14:23.060 18.015 - 18.110: 99.2644% ( 7) 00:14:23.060 18.110 - 18.204: 99.3181% ( 7) 00:14:23.060 18.204 - 18.299: 99.3717% ( 7) 00:14:23.060 18.299 - 18.394: 99.4253% ( 7) 00:14:23.060 18.394 - 18.489: 99.5173% ( 12) 00:14:23.060 18.489 - 18.584: 99.5403% ( 3) 00:14:23.060 18.584 - 18.679: 99.6092% ( 9) 00:14:23.060 18.679 - 18.773: 99.6475% ( 5) 00:14:23.060 18.773 - 18.868: 99.6782% ( 4) 00:14:23.060 18.868 - 18.963: 99.6935% ( 2) 00:14:23.060 18.963 - 19.058: 
99.7088% ( 2) 00:14:23.060 19.058 - 19.153: 99.7242% ( 2) 00:14:23.060 19.153 - 19.247: 99.7471% ( 3) 00:14:23.060 19.247 - 19.342: 99.7548% ( 1) 00:14:23.060 19.342 - 19.437: 99.7855% ( 4) 00:14:23.060 19.437 - 19.532: 99.7931% ( 1) 00:14:23.060 19.532 - 19.627: 99.8161% ( 3) 00:14:23.060 19.627 - 19.721: 99.8314% ( 2) 00:14:23.060 19.721 - 19.816: 99.8391% ( 1) 00:14:23.060 19.911 - 20.006: 99.8544% ( 2) 00:14:23.060 20.006 - 20.101: 99.8621% ( 1) 00:14:23.060 20.290 - 20.385: 99.8697% ( 1) 00:14:23.060 21.144 - 21.239: 99.8774% ( 1) 00:14:23.060 21.333 - 21.428: 99.8851% ( 1) 00:14:23.060 22.756 - 22.850: 99.8927% ( 1) 00:14:23.060 22.945 - 23.040: 99.9004% ( 1) 00:14:23.060 3980.705 - 4004.978: 99.9464% ( 6) 00:14:23.060 4004.978 - 4029.250: 100.0000% ( 7) 00:14:23.060 00:14:23.060 Complete histogram 00:14:23.060 ================== 00:14:23.060 Range in us Cumulative Count 00:14:23.060 2.062 - 2.074: 9.3633% ( 1222) 00:14:23.060 2.074 - 2.086: 44.0579% ( 4528) 00:14:23.060 2.086 - 2.098: 47.3374% ( 428) 00:14:23.061 2.098 - 2.110: 53.4289% ( 795) 00:14:23.061 2.110 - 2.121: 59.9418% ( 850) 00:14:23.061 2.121 - 2.133: 61.5202% ( 206) 00:14:23.061 2.133 - 2.145: 68.7227% ( 940) 00:14:23.061 2.145 - 2.157: 76.1321% ( 967) 00:14:23.061 2.157 - 2.169: 76.8830% ( 98) 00:14:23.061 2.169 - 2.181: 79.4345% ( 333) 00:14:23.061 2.181 - 2.193: 81.2965% ( 243) 00:14:23.061 2.193 - 2.204: 81.7255% ( 56) 00:14:23.061 2.204 - 2.216: 84.2771% ( 333) 00:14:23.061 2.216 - 2.228: 88.2844% ( 523) 00:14:23.061 2.228 - 2.240: 90.1004% ( 237) 00:14:23.061 2.240 - 2.252: 91.4719% ( 179) 00:14:23.061 2.252 - 2.264: 92.3148% ( 110) 00:14:23.061 2.264 - 2.276: 92.6289% ( 41) 00:14:23.061 2.276 - 2.287: 92.9661% ( 44) 00:14:23.061 2.287 - 2.299: 93.4794% ( 67) 00:14:23.061 2.299 - 2.311: 94.1460% ( 87) 00:14:23.061 2.311 - 2.323: 94.5138% ( 48) 00:14:23.061 2.323 - 2.335: 94.7360% ( 29) 00:14:23.061 2.335 - 2.347: 94.9506% ( 28) 00:14:23.061 2.347 - 2.359: 95.3643% ( 54) 00:14:23.061 2.359 - 2.370: 95.8547% ( 64) 00:14:23.061 2.370 - 2.382: 96.3528% ( 65) 00:14:23.061 2.382 - 2.394: 96.8585% ( 66) 00:14:23.061 2.394 - 2.406: 97.1803% ( 42) 00:14:23.061 2.406 - 2.418: 97.2569% ( 10) 00:14:23.061 2.418 - 2.430: 97.3872% ( 17) 00:14:23.061 2.430 - 2.441: 97.5098% ( 16) 00:14:23.061 2.441 - 2.453: 97.5787% ( 9) 00:14:23.061 2.453 - 2.465: 97.7167% ( 18) 00:14:23.061 2.465 - 2.477: 97.7856% ( 9) 00:14:23.061 2.477 - 2.489: 97.8316% ( 6) 00:14:23.061 2.489 - 2.501: 97.8852% ( 7) 00:14:23.061 2.501 - 2.513: 97.9312% ( 6) 00:14:23.061 2.513 - 2.524: 97.9618% ( 4) 00:14:23.061 2.524 - 2.536: 98.0385% ( 10) 00:14:23.061 2.536 - 2.548: 98.0615% ( 3) 00:14:23.061 2.548 - 2.560: 98.0998% ( 5) 00:14:23.061 2.560 - 2.572: 98.1151% ( 2) 00:14:23.061 2.572 - 2.584: 98.1534% ( 5) 00:14:23.061 2.584 - 2.596: 98.1764% ( 3) 00:14:23.061 2.596 - 2.607: 98.1994% ( 3) 00:14:23.061 2.607 - 2.619: 98.2530% ( 7) 00:14:23.061 2.619 - 2.631: 98.2760% ( 3) 00:14:23.061 2.631 - 2.643: 98.2990% ( 3) 00:14:23.061 2.643 - 2.655: 98.3220% ( 3) 00:14:23.061 2.655 - 2.667: 98.3296% ( 1) 00:14:23.061 2.667 - 2.679: 98.3450% ( 2) 00:14:23.061 2.690 - 2.702: 98.3603% ( 2) 00:14:23.061 2.750 - 2.761: 98.3679% ( 1) 00:14:23.061 2.761 - 2.773: 98.3909% ( 3) 00:14:23.061 2.773 - 2.785: 98.4063% ( 2) 00:14:23.061 2.785 - 2.797: 98.4139% ( 1) 00:14:23.061 2.797 - 2.809: 98.4216% ( 1) 00:14:23.061 2.809 - 2.821: 98.4522% ( 4) 00:14:23.061 2.821 - 2.833: 98.4599% ( 1) 00:14:23.061 2.844 - 2.856: 98.4676% ( 1) 00:14:23.061 2.987 - 2.999: 98.4752% ( 1) 
00:14:23.061 3.153 - 3.176: 98.4829% ( 1) 00:14:23.061 3.247 - 3.271: 98.4905% ( 1) 00:14:23.061 3.271 - 3.295: 98.4982% ( 1) 00:14:23.061 3.295 - 3.319: 98.5059% ( 1) 00:14:23.061 3.319 - 3.342: 98.5135% ( 1) 00:14:23.061 3.342 - 3.366: 98.5212% ( 1) 00:14:23.061 3.413 - 3.437: 98.5365% ( 2) 00:14:23.061 3.437 - 3.461: 98.5442% ( 1) 00:14:23.061 3.461 - 3.484: 98.5595% ( 2) 00:14:23.061 3.508 - 3.532: 98.5672% ( 1) 00:14:23.061 3.579 - 3.603: 98.5901% ( 3) 00:14:23.061 3.603 - 3.627: 98.6131% ( 3) 00:14:23.061 3.840 - 3.864: 98.6208% ( 1) [2024-07-15 09:47:39.838765] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:23.319 4.148 - 4.172: 98.6285% ( 1) 00:14:23.319 4.717 - 4.741: 98.6361% ( 1) 00:14:23.319 4.859 - 4.883: 98.6438% ( 1) 00:14:23.319 5.001 - 5.025: 98.6514% ( 1) 00:14:23.319 5.025 - 5.049: 98.6591% ( 1) 00:14:23.319 5.120 - 5.144: 98.6668% ( 1) 00:14:23.319 5.144 - 5.167: 98.6744% ( 1) 00:14:23.319 5.286 - 5.310: 98.6898% ( 2) 00:14:23.319 5.499 - 5.523: 98.6974% ( 1) 00:14:23.319 5.570 - 5.594: 98.7051% ( 1) 00:14:23.319 5.736 - 5.760: 98.7127% ( 1) 00:14:23.319 5.997 - 6.021: 98.7204% ( 1) 00:14:23.319 6.044 - 6.068: 98.7281% ( 1) 00:14:23.319 6.116 - 6.163: 98.7357% ( 1) 00:14:23.319 6.163 - 6.210: 98.7434% ( 1) 00:14:23.319 6.447 - 6.495: 98.7511% ( 1) 00:14:23.319 6.590 - 6.637: 98.7664% ( 2) 00:14:23.319 6.637 - 6.684: 98.7894% ( 3) 00:14:23.319 6.921 - 6.969: 98.7970% ( 1) 00:14:23.319 6.969 - 7.016: 98.8047% ( 1) 00:14:23.319 7.253 - 7.301: 98.8124% ( 1) 00:14:23.319 7.917 - 7.964: 98.8200% ( 1) 00:14:23.319 8.107 - 8.154: 98.8277% ( 1) 00:14:23.319 8.249 - 8.296: 98.8353% ( 1) 00:14:23.319 10.714 - 10.761: 98.8430% ( 1) 00:14:23.319 11.283 - 11.330: 98.8507% ( 1) 00:14:23.319 15.360 - 15.455: 98.8583% ( 1) 00:14:23.319 15.455 - 15.550: 98.8660% ( 1) 00:14:23.319 15.550 - 15.644: 98.8736% ( 1) 00:14:23.319 15.644 - 15.739: 98.8890% ( 2) 00:14:23.319 15.739 - 15.834: 98.9120% ( 3) 00:14:23.319 15.834 - 15.929: 98.9349% ( 3) 00:14:23.319 15.929 - 16.024: 98.9503% ( 2) 00:14:23.319 16.024 - 16.119: 98.9809% ( 4) 00:14:23.319 16.119 - 16.213: 99.0116% ( 4) 00:14:23.319 16.213 - 16.308: 99.0499% ( 5) 00:14:23.319 16.308 - 16.403: 99.0652% ( 2) 00:14:23.319 16.403 - 16.498: 99.1035% ( 5) 00:14:23.319 16.498 - 16.593: 99.1342% ( 4) 00:14:23.319 16.593 - 16.687: 99.1495% ( 2) 00:14:23.319 16.687 - 16.782: 99.1955% ( 6) 00:14:23.319 16.782 - 16.877: 99.2185% ( 3) 00:14:23.319 16.877 - 16.972: 99.2338% ( 2) 00:14:23.319 16.972 - 17.067: 99.2644% ( 4) 00:14:23.319 17.067 - 17.161: 99.2721% ( 1) 00:14:23.319 17.161 - 17.256: 99.2951% ( 3) 00:14:23.319 17.351 - 17.446: 99.3257% ( 4) 00:14:23.319 17.446 - 17.541: 99.3334% ( 1) 00:14:23.319 17.541 - 17.636: 99.3487% ( 2) 00:14:23.319 17.825 - 17.920: 99.3640% ( 2) 00:14:23.319 17.920 - 18.015: 99.3717% ( 1) 00:14:23.319 18.394 - 18.489: 99.3794% ( 1) 00:14:23.319 18.679 - 18.773: 99.4023% ( 3) 00:14:23.319 3980.705 - 4004.978: 99.7548% ( 46) 00:14:23.319 4004.978 - 4029.250: 100.0000% ( 32) 00:14:23.319 00:14:23.319 09:47:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:23.319 09:47:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:23.319 09:47:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:23.319 09:47:39 nvmf_tcp.nvmf_vfio_user
-- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:23.319 09:47:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:23.576 [ 00:14:23.576 { 00:14:23.577 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:23.577 "subtype": "Discovery", 00:14:23.577 "listen_addresses": [], 00:14:23.577 "allow_any_host": true, 00:14:23.577 "hosts": [] 00:14:23.577 }, 00:14:23.577 { 00:14:23.577 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:23.577 "subtype": "NVMe", 00:14:23.577 "listen_addresses": [ 00:14:23.577 { 00:14:23.577 "trtype": "VFIOUSER", 00:14:23.577 "adrfam": "IPv4", 00:14:23.577 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:23.577 "trsvcid": "0" 00:14:23.577 } 00:14:23.577 ], 00:14:23.577 "allow_any_host": true, 00:14:23.577 "hosts": [], 00:14:23.577 "serial_number": "SPDK1", 00:14:23.577 "model_number": "SPDK bdev Controller", 00:14:23.577 "max_namespaces": 32, 00:14:23.577 "min_cntlid": 1, 00:14:23.577 "max_cntlid": 65519, 00:14:23.577 "namespaces": [ 00:14:23.577 { 00:14:23.577 "nsid": 1, 00:14:23.577 "bdev_name": "Malloc1", 00:14:23.577 "name": "Malloc1", 00:14:23.577 "nguid": "7A707BA1D01A4116BA74823A4EA58C6B", 00:14:23.577 "uuid": "7a707ba1-d01a-4116-ba74-823a4ea58c6b" 00:14:23.577 }, 00:14:23.577 { 00:14:23.577 "nsid": 2, 00:14:23.577 "bdev_name": "Malloc3", 00:14:23.577 "name": "Malloc3", 00:14:23.577 "nguid": "5EF1449F2D34470384BFAA8871AF4F31", 00:14:23.577 "uuid": "5ef1449f-2d34-4703-84bf-aa8871af4f31" 00:14:23.577 } 00:14:23.577 ] 00:14:23.577 }, 00:14:23.577 { 00:14:23.577 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:23.577 "subtype": "NVMe", 00:14:23.577 "listen_addresses": [ 00:14:23.577 { 00:14:23.577 "trtype": "VFIOUSER", 00:14:23.577 "adrfam": "IPv4", 00:14:23.577 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:23.577 "trsvcid": "0" 00:14:23.577 } 00:14:23.577 ], 00:14:23.577 "allow_any_host": true, 00:14:23.577 "hosts": [], 00:14:23.577 "serial_number": "SPDK2", 00:14:23.577 "model_number": "SPDK bdev Controller", 00:14:23.577 "max_namespaces": 32, 00:14:23.577 "min_cntlid": 1, 00:14:23.577 "max_cntlid": 65519, 00:14:23.577 "namespaces": [ 00:14:23.577 { 00:14:23.577 "nsid": 1, 00:14:23.577 "bdev_name": "Malloc2", 00:14:23.577 "name": "Malloc2", 00:14:23.577 "nguid": "D05F88AC6BC54DE9A9D870432FD38B6D", 00:14:23.577 "uuid": "d05f88ac-6bc5-4de9-a9d8-70432fd38b6d" 00:14:23.577 } 00:14:23.577 ] 00:14:23.577 } 00:14:23.577 ] 00:14:23.577 09:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:23.577 09:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1865345 00:14:23.577 09:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:23.577 09:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:23.577 09:47:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:23.577 09:47:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:23.577 09:47:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:23.577 09:47:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:23.577 09:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:23.577 09:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:23.577 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.577 [2024-07-15 09:47:40.317385] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:23.834 Malloc4 00:14:23.834 09:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:24.092 [2024-07-15 09:47:40.660969] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:24.092 09:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:24.092 Asynchronous Event Request test 00:14:24.092 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:24.092 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:24.092 Registering asynchronous event callbacks... 00:14:24.092 Starting namespace attribute notice tests for all controllers... 00:14:24.092 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:24.092 aer_cb - Changed Namespace 00:14:24.092 Cleaning up... 00:14:24.349 [ 00:14:24.349 { 00:14:24.349 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:24.349 "subtype": "Discovery", 00:14:24.349 "listen_addresses": [], 00:14:24.349 "allow_any_host": true, 00:14:24.349 "hosts": [] 00:14:24.349 }, 00:14:24.349 { 00:14:24.349 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:24.349 "subtype": "NVMe", 00:14:24.349 "listen_addresses": [ 00:14:24.349 { 00:14:24.349 "trtype": "VFIOUSER", 00:14:24.349 "adrfam": "IPv4", 00:14:24.349 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:24.349 "trsvcid": "0" 00:14:24.349 } 00:14:24.349 ], 00:14:24.349 "allow_any_host": true, 00:14:24.349 "hosts": [], 00:14:24.349 "serial_number": "SPDK1", 00:14:24.349 "model_number": "SPDK bdev Controller", 00:14:24.349 "max_namespaces": 32, 00:14:24.349 "min_cntlid": 1, 00:14:24.349 "max_cntlid": 65519, 00:14:24.349 "namespaces": [ 00:14:24.349 { 00:14:24.349 "nsid": 1, 00:14:24.349 "bdev_name": "Malloc1", 00:14:24.349 "name": "Malloc1", 00:14:24.349 "nguid": "7A707BA1D01A4116BA74823A4EA58C6B", 00:14:24.349 "uuid": "7a707ba1-d01a-4116-ba74-823a4ea58c6b" 00:14:24.349 }, 00:14:24.349 { 00:14:24.349 "nsid": 2, 00:14:24.349 "bdev_name": "Malloc3", 00:14:24.349 "name": "Malloc3", 00:14:24.349 "nguid": "5EF1449F2D34470384BFAA8871AF4F31", 00:14:24.349 "uuid": "5ef1449f-2d34-4703-84bf-aa8871af4f31" 00:14:24.349 } 00:14:24.349 ] 00:14:24.349 }, 00:14:24.349 { 00:14:24.349 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:24.349 "subtype": "NVMe", 00:14:24.349 "listen_addresses": [ 00:14:24.349 { 00:14:24.349 "trtype": "VFIOUSER", 00:14:24.349 "adrfam": "IPv4", 00:14:24.349 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:24.349 "trsvcid": "0" 00:14:24.349 } 00:14:24.349 ], 00:14:24.349 "allow_any_host": true, 00:14:24.349 "hosts": [], 00:14:24.349 "serial_number": "SPDK2", 00:14:24.349 "model_number": "SPDK bdev Controller", 00:14:24.349 
"max_namespaces": 32, 00:14:24.349 "min_cntlid": 1, 00:14:24.349 "max_cntlid": 65519, 00:14:24.349 "namespaces": [ 00:14:24.349 { 00:14:24.349 "nsid": 1, 00:14:24.349 "bdev_name": "Malloc2", 00:14:24.349 "name": "Malloc2", 00:14:24.349 "nguid": "D05F88AC6BC54DE9A9D870432FD38B6D", 00:14:24.349 "uuid": "d05f88ac-6bc5-4de9-a9d8-70432fd38b6d" 00:14:24.349 }, 00:14:24.349 { 00:14:24.349 "nsid": 2, 00:14:24.349 "bdev_name": "Malloc4", 00:14:24.349 "name": "Malloc4", 00:14:24.349 "nguid": "2DB79AD64DE34A519B04EDB614BB4A40", 00:14:24.349 "uuid": "2db79ad6-4de3-4a51-9b04-edb614bb4a40" 00:14:24.349 } 00:14:24.349 ] 00:14:24.349 } 00:14:24.349 ] 00:14:24.349 09:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1865345 00:14:24.349 09:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:24.349 09:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1859754 00:14:24.349 09:47:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1859754 ']' 00:14:24.349 09:47:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1859754 00:14:24.349 09:47:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:14:24.349 09:47:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:24.349 09:47:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1859754 00:14:24.349 09:47:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:24.349 09:47:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:24.349 09:47:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1859754' 00:14:24.349 killing process with pid 1859754 00:14:24.349 09:47:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1859754 00:14:24.349 09:47:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1859754 00:14:24.606 09:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:24.606 09:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:24.606 09:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:24.606 09:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:24.606 09:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:24.606 09:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1865487 00:14:24.606 09:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:24.606 09:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1865487' 00:14:24.606 Process pid: 1865487 00:14:24.606 09:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:24.606 09:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1865487 00:14:24.606 09:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1865487 ']' 00:14:24.606 09:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.606 09:47:41 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:24.606 09:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.606 09:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:24.606 09:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:24.606 [2024-07-15 09:47:41.323921] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:24.606 [2024-07-15 09:47:41.324978] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:14:24.606 [2024-07-15 09:47:41.325034] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.606 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.606 [2024-07-15 09:47:41.357149] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:24.606 [2024-07-15 09:47:41.387700] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:24.865 [2024-07-15 09:47:41.476194] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.865 [2024-07-15 09:47:41.476254] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.865 [2024-07-15 09:47:41.476269] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.865 [2024-07-15 09:47:41.476282] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.865 [2024-07-15 09:47:41.476294] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:24.865 [2024-07-15 09:47:41.476377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.865 [2024-07-15 09:47:41.476434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.865 [2024-07-15 09:47:41.476548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:24.865 [2024-07-15 09:47:41.476550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.865 [2024-07-15 09:47:41.577526] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:24.865 [2024-07-15 09:47:41.577756] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:24.865 [2024-07-15 09:47:41.578063] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:14:24.865 [2024-07-15 09:47:41.578694] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:24.865 [2024-07-15 09:47:41.578962] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
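(The notices above show the target coming back up with --interrupt-mode for the '-M -I' variant of this test. Stripped of the xtrace prefixes, the bring-up the surrounding trace performs reduces to roughly the sketch below; $SPDK abbreviates the workspace path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk in this note only and is not a variable the script itself defines.)

  # Relaunch the target in interrupt mode: shm instance 0 (-i 0), tracepoint
  # group mask 0xFFFF (-e), reactors pinned to cores 0-3 (-m '[0,1,2,3]').
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &

  # Once the RPC socket is listening, create the vfio-user transport; the
  # extra '-M -I' transport flags come from setup_nvmf_vfio_user's arguments
  # in the trace above.
  $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I

(The 'Set spdk_thread (...) to intr mode' notices confirm interrupt mode took effect before the transport and subsystems are created in the trace that follows.)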
00:14:24.865 09:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:24.865 09:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:14:24.865 09:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:26.237 09:47:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:26.237 09:47:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:26.237 09:47:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:26.237 09:47:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:26.237 09:47:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:26.237 09:47:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:26.496 Malloc1 00:14:26.496 09:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:26.754 09:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:27.011 09:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:27.269 09:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:27.269 09:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:27.269 09:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:27.527 Malloc2 00:14:27.527 09:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:27.785 09:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:28.043 09:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:28.301 09:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:28.301 09:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1865487 00:14:28.301 09:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1865487 ']' 00:14:28.301 09:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1865487 00:14:28.301 09:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:14:28.301 09:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:28.301 09:47:44 
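(Device 1 is now fully wired up and the same steps repeat for device 2. Condensed from the trace above, one iteration of the setup loop is roughly the following, with $SPDK again abbreviating the workspace path and $i standing for the device index; both are shorthands for this note only.)

  # Per-device vfio-user socket directory.
  mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i

  # 64 MiB malloc bdev with 512-byte blocks to back the namespace.
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i

  # Subsystem allowing any host (-a), serial number SPDK$i.
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i

  # Attach the bdev as namespace 1 and listen on the vfio-user directory;
  # for VFIOUSER the listener address is a filesystem path, not an IP endpoint.
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
      -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0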
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1865487 00:14:28.301 09:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:28.301 09:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:28.301 09:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1865487' 00:14:28.301 killing process with pid 1865487 00:14:28.301 09:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1865487 00:14:28.301 09:47:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1865487 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:28.560 00:14:28.560 real 0m52.369s 00:14:28.560 user 3m26.660s 00:14:28.560 sys 0m4.376s 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:28.560 ************************************ 00:14:28.560 END TEST nvmf_vfio_user 00:14:28.560 ************************************ 00:14:28.560 09:47:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:28.560 09:47:45 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:28.560 09:47:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:28.560 09:47:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:28.560 09:47:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:28.560 ************************************ 00:14:28.560 START TEST nvmf_vfio_user_nvme_compliance 00:14:28.560 ************************************ 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:28.560 * Looking for test storage... 
00:14:28.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:28.560 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:28.821 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=1866030 00:14:28.821 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1866030' 00:14:28.821 Process pid: 1866030 00:14:28.821 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:28.821 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:28.821 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1866030 00:14:28.821 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1866030 ']' 00:14:28.821 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.821 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.821 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.821 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.821 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:28.821 [2024-07-15 09:47:45.392297] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:14:28.821 [2024-07-15 09:47:45.392414] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.821 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.821 [2024-07-15 09:47:45.428748] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:28.821 [2024-07-15 09:47:45.457367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:28.821 [2024-07-15 09:47:45.542826] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.821 [2024-07-15 09:47:45.542896] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.821 [2024-07-15 09:47:45.542912] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.821 [2024-07-15 09:47:45.542923] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.821 [2024-07-15 09:47:45.542933] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:28.821 [2024-07-15 09:47:45.543015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.821 [2024-07-15 09:47:45.543042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.821 [2024-07-15 09:47:45.543046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.081 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.081 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:14:29.081 09:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:30.018 malloc0 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:30.018 09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.019 
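[Annotation] The rpc_cmd calls traced above configure the vfio-user compliance target end to end: a VFIOUSER transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem that accepts any host (serial "spdk", up to 32 namespaces), the namespace attachment, and a listener rooted at the /var/run/vfio-user socket directory. A minimal sketch of the same sequence written directly against scripts/rpc.py follows; the test itself goes through the rpc_cmd shell wrapper, and the $rpc path plus the flag comments are inferred rather than quoted from the log:

    rpc="$rootdir/scripts/rpc.py"   # illustrative path; the test uses the rpc_cmd wrapper

    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    $rpc bdev_malloc_create 64 512 -b malloc0     # 64 MiB bdev, 512 B block size
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 \
        -a -s spdk -m 32                          # allow any host, serial "spdk", max 32 namespaces
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0    # for vfio-user, traddr is the socket directory

With the target configured, compliance.sh launches the nvme_compliance binary against that socket, which is the run traced next.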
09:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:30.019 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.277 00:14:30.277 00:14:30.277 CUnit - A unit testing framework for C - Version 2.1-3 00:14:30.277 http://cunit.sourceforge.net/ 00:14:30.277 00:14:30.277 00:14:30.277 Suite: nvme_compliance 00:14:30.277 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 09:47:46.878403] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.277 [2024-07-15 09:47:46.879799] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:30.277 [2024-07-15 09:47:46.879823] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:30.277 [2024-07-15 09:47:46.879850] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:30.277 [2024-07-15 09:47:46.881421] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.277 passed 00:14:30.277 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 09:47:46.966992] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.277 [2024-07-15 09:47:46.970016] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.277 passed 00:14:30.277 Test: admin_identify_ns ...[2024-07-15 09:47:47.056471] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.536 [2024-07-15 09:47:47.116894] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:30.536 [2024-07-15 09:47:47.124907] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:30.536 [2024-07-15 09:47:47.146003] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.536 passed 00:14:30.536 Test: admin_get_features_mandatory_features ...[2024-07-15 09:47:47.229615] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.536 [2024-07-15 09:47:47.232633] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.536 passed 00:14:30.536 Test: admin_get_features_optional_features ...[2024-07-15 09:47:47.315168] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.536 [2024-07-15 09:47:47.318199] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.801 passed 00:14:30.801 Test: admin_set_features_number_of_queues ...[2024-07-15 09:47:47.402398] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.801 [2024-07-15 09:47:47.507004] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.801 passed 00:14:31.092 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 09:47:47.590152] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:31.092 [2024-07-15 09:47:47.593195] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:31.092 passed 00:14:31.092 Test: admin_get_log_page_with_lpo ...[2024-07-15 09:47:47.677395] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:31.092 [2024-07-15 09:47:47.744892] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:31.092 [2024-07-15 09:47:47.757970] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:31.092 passed 00:14:31.092 Test: fabric_property_get ...[2024-07-15 09:47:47.841592] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:31.092 [2024-07-15 09:47:47.842852] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:31.092 [2024-07-15 09:47:47.844615] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:31.353 passed 00:14:31.353 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 09:47:47.928158] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:31.353 [2024-07-15 09:47:47.929471] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:31.353 [2024-07-15 09:47:47.931195] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:31.353 passed 00:14:31.353 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 09:47:48.017261] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:31.353 [2024-07-15 09:47:48.100902] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:31.353 [2024-07-15 09:47:48.116899] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:31.353 [2024-07-15 09:47:48.121993] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:31.611 passed 00:14:31.611 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 09:47:48.205925] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:31.611 [2024-07-15 09:47:48.207222] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:31.611 [2024-07-15 09:47:48.208946] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:31.611 passed 00:14:31.611 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 09:47:48.292122] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:31.611 [2024-07-15 09:47:48.367902] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:31.611 [2024-07-15 09:47:48.391890] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:31.869 [2024-07-15 09:47:48.397009] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:31.869 passed 00:14:31.869 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 09:47:48.481064] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:31.869 [2024-07-15 09:47:48.482366] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:31.869 [2024-07-15 09:47:48.482417] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:31.869 [2024-07-15 09:47:48.484087] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:31.869 passed 00:14:31.869 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 09:47:48.567002] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.128 [2024-07-15 09:47:48.661885] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:14:32.128 [2024-07-15 09:47:48.669889] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:32.128 [2024-07-15 09:47:48.677903] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:32.128 [2024-07-15 09:47:48.685901] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:32.128 [2024-07-15 09:47:48.715013] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:32.128 passed 00:14:32.128 Test: admin_create_io_sq_verify_pc ...[2024-07-15 09:47:48.798898] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.128 [2024-07-15 09:47:48.816899] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:32.128 [2024-07-15 09:47:48.834293] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:32.128 passed 00:14:32.387 Test: admin_create_io_qp_max_qps ...[2024-07-15 09:47:48.916836] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.323 [2024-07-15 09:47:50.011894] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:14:33.892 [2024-07-15 09:47:50.396561] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.892 passed 00:14:33.892 Test: admin_create_io_sq_shared_cq ...[2024-07-15 09:47:50.480895] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.892 [2024-07-15 09:47:50.614883] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:33.892 [2024-07-15 09:47:50.651957] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.152 passed 00:14:34.152 00:14:34.152 Run Summary: Type Total Ran Passed Failed Inactive 00:14:34.152 suites 1 1 n/a 0 0 00:14:34.152 tests 18 18 18 0 0 00:14:34.152 asserts 360 360 360 0 n/a 00:14:34.152 00:14:34.152 Elapsed time = 1.565 seconds 00:14:34.152 09:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1866030 00:14:34.152 09:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1866030 ']' 00:14:34.152 09:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1866030 00:14:34.152 09:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:14:34.152 09:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:34.152 09:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1866030 00:14:34.152 09:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:34.152 09:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:34.152 09:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1866030' 00:14:34.152 killing process with pid 1866030 00:14:34.152 09:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1866030 00:14:34.152 09:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1866030 00:14:34.411 09:47:50 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:34.411 09:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:34.411 00:14:34.411 real 0m5.713s 00:14:34.411 user 0m16.071s 00:14:34.411 sys 0m0.565s 00:14:34.411 09:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:34.411 09:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:34.411 ************************************ 00:14:34.411 END TEST nvmf_vfio_user_nvme_compliance 00:14:34.411 ************************************ 00:14:34.411 09:47:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:34.411 09:47:51 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:34.411 09:47:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:34.411 09:47:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.411 09:47:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:34.411 ************************************ 00:14:34.411 START TEST nvmf_vfio_user_fuzz 00:14:34.411 ************************************ 00:14:34.411 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:34.411 * Looking for test storage... 00:14:34.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:34.411 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:34.411 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:34.411 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.411 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.411 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.411 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
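[Annotation] Among the common.sh variables set above, the host NQN comes from `nvme gen-hostnqn` and the host ID reuses the UUID suffix of that NQN (5b23e107-7094-e311-b1cb-001e67a97d55 in this run). A sketch of the derivation; the exact parameter expansion is an assumption, since only the resulting values and the NVME_HOST array appear in the trace:

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumed: strip through "uuid:" to keep the bare UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")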
00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.412 09:47:51 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1866801 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1866801' 00:14:34.412 Process pid: 1866801 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1866801 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1866801 ']' 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
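[Annotation] The fuzz target is launched the same way as every nvmf app in this log: nvmf_tgt goes to the background with -i 0 -e 0xFFFF -m 0x1 (instance 0, all tracepoint groups, core 0 only), a trap guarantees cleanup, and waitforlisten blocks until the RPC socket answers. A simplified sketch of that launch-and-wait pattern; the polling loop stands in for autotest_common.sh's real waitforlisten, which carries more retry and error handling:

    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # killprocess is the suite's ps/kill/wait helper, as traced at the top of this section
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT

    # Poll the UNIX-domain RPC socket until the app is up (simplified).
    for ((i = 0; i < 100; i++)); do
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done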
00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.412 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:34.671 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.671 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:14:34.671 09:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:36.052 malloc0 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:36.052 09:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:08.122 Fuzzing completed. 
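[Annotation] The 30-second fuzz run above finishes cleanly; its opcode summary follows. For reference, the invocation broken out with its options, as recorded in the trace (the comments on -N and -a are hedged, since the log itself does not explain them):

    fuzz_args=(
        -m 0x2       # run the fuzzer reactor on core 1
        -t 30        # total fuzzing time in seconds
        -S 123456    # fixed random seed, so a failing run can be replayed
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
        -N -a        # passed by vfio_user_fuzz.sh; meanings not shown in the log
    )
    "$rootdir/test/app/fuzz/nvme_fuzz/nvme_fuzz" "${fuzz_args[@]}"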
Shutting down the fuzz application 00:15:08.122 00:15:08.122 Dumping successful admin opcodes: 00:15:08.122 8, 9, 10, 24, 00:15:08.122 Dumping successful io opcodes: 00:15:08.122 0, 00:15:08.122 NS: 0x200003a1ef00 I/O qp, Total commands completed: 580766, total successful commands: 2231, random_seed: 2921266624 00:15:08.122 NS: 0x200003a1ef00 admin qp, Total commands completed: 105616, total successful commands: 870, random_seed: 2024810688 00:15:08.122 09:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:08.122 09:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.122 09:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:08.122 09:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.122 09:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1866801 00:15:08.122 09:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1866801 ']' 00:15:08.122 09:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1866801 00:15:08.122 09:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:15:08.122 09:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:08.122 09:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1866801 00:15:08.122 09:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:08.122 09:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:08.122 09:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1866801' 00:15:08.122 killing process with pid 1866801 00:15:08.122 09:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1866801 00:15:08.122 09:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1866801 00:15:08.122 09:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:08.122 09:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:08.122 00:15:08.122 real 0m32.541s 00:15:08.122 user 0m31.243s 00:15:08.122 sys 0m28.915s 00:15:08.122 09:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:08.122 09:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:08.122 ************************************ 00:15:08.122 END TEST nvmf_vfio_user_fuzz 00:15:08.122 ************************************ 00:15:08.122 09:48:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:08.122 09:48:23 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:08.122 09:48:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:08.122 09:48:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:08.122 09:48:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:08.122 ************************************ 
00:15:08.122 START TEST nvmf_host_management 00:15:08.122 ************************************ 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:08.122 * Looking for test storage... 00:15:08.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.122 
09:48:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:08.122 09:48:23 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:15:08.122 09:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:09.057 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:09.057 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:09.057 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:09.057 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:15:09.057 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:09.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:09.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:15:09.058 00:15:09.058 --- 10.0.0.2 ping statistics --- 00:15:09.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.058 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:09.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:09.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:15:09.058 00:15:09.058 --- 10.0.0.1 ping statistics --- 00:15:09.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.058 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1872859 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1872859 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1872859 ']' 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:09.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.058 09:48:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:09.058 [2024-07-15 09:48:25.836745] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:09.058 [2024-07-15 09:48:25.836829] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.315 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.315 [2024-07-15 09:48:25.875898] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:09.315 [2024-07-15 09:48:25.908430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.315 [2024-07-15 09:48:26.009954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.315 [2024-07-15 09:48:26.010008] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.315 [2024-07-15 09:48:26.010024] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.315 [2024-07-15 09:48:26.010037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.315 [2024-07-15 09:48:26.010049] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.315 [2024-07-15 09:48:26.010149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.315 [2024-07-15 09:48:26.010187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.315 [2024-07-15 09:48:26.010234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:09.315 [2024-07-15 09:48:26.010237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:09.573 [2024-07-15 09:48:26.172743] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:09.573 09:48:26 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:09.573 Malloc0 00:15:09.573 [2024-07-15 09:48:26.233769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1872913 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1872913 /var/tmp/bdevperf.sock 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1872913 ']' 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:09.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
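[Editor's note] Everything from nvmfappstart through the two "Waiting for process to start up and listen on UNIX domain socket" messages is one pattern applied twice, once for /var/tmp/spdk.sock and once for /var/tmp/bdevperf.sock: start the SPDK app (the target wrapped in "ip netns exec"), then poll its RPC socket until it answers. A minimal bash sketch of that pattern; the rpc_get_methods probe and the relative paths are illustrative assumptions, not the verbatim common.sh helper:

start_and_waitforlisten() {
    local rpc_sock=$1; shift
    # "$@" is the full app command line, e.g.:
    #   ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
    "$@" &
    local pid=$! retries=100
    while (( retries-- > 0 )); do
        # Any cheap RPC doubles as a liveness probe once the socket listens.
        if ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    kill -9 "$pid" 2> /dev/null
    return 1
}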
00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:09.573 { 00:15:09.573 "params": { 00:15:09.573 "name": "Nvme$subsystem", 00:15:09.573 "trtype": "$TEST_TRANSPORT", 00:15:09.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:09.573 "adrfam": "ipv4", 00:15:09.573 "trsvcid": "$NVMF_PORT", 00:15:09.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:09.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:09.573 "hdgst": ${hdgst:-false}, 00:15:09.573 "ddgst": ${ddgst:-false} 00:15:09.573 }, 00:15:09.573 "method": "bdev_nvme_attach_controller" 00:15:09.573 } 00:15:09.573 EOF 00:15:09.573 )") 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:09.573 09:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:09.573 "params": { 00:15:09.573 "name": "Nvme0", 00:15:09.573 "trtype": "tcp", 00:15:09.573 "traddr": "10.0.0.2", 00:15:09.573 "adrfam": "ipv4", 00:15:09.573 "trsvcid": "4420", 00:15:09.573 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:09.573 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:09.573 "hdgst": false, 00:15:09.573 "ddgst": false 00:15:09.573 }, 00:15:09.573 "method": "bdev_nvme_attach_controller" 00:15:09.573 }' 00:15:09.573 [2024-07-15 09:48:26.313721] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:09.573 [2024-07-15 09:48:26.313807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1872913 ] 00:15:09.573 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.573 [2024-07-15 09:48:26.346250] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:09.831 [2024-07-15 09:48:26.375714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.831 [2024-07-15 09:48:26.461870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.089 Running I/O for 10 seconds... 
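[Editor's note] bdevperf has just printed "Running I/O for 10 seconds..."; before injecting any fault, the harness must prove reads are actually flowing. The waitforio logic traced below (@52-@62) reduces to this polling loop, sketched with the same socket, jq filter, threshold, and sleep interval that appear in the log (67 ops on the first poll, 515 on the second):

waitforio() {
    local sock=$1 bdev=$2 i=10 count
    while (( i-- > 0 )); do
        # Ask bdevperf's own RPC server for per-bdev statistics.
        count=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        # 100 read ops is a deliberately low bar: evidence of traffic, not a benchmark.
        [ "$count" -ge 100 ] && return 0
        sleep 0.25
    done
    return 1
}
# waitforio /var/tmp/bdevperf.sock Nvme0n1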
00:15:10.089 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.089 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:15:10.089 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:10.089 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.089 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:10.089 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.089 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:10.089 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:10.089 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:10.089 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:10.089 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:15:10.090 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:15:10.090 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:10.090 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:10.090 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:10.090 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:10.090 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.090 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:10.347 09:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.347 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:15:10.347 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:15:10.347 09:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:10.606 [2024-07-15 09:48:27.198771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x865900 is same with the state(5) to be set 00:15:10.606 [2024-07-15 09:48:27.198890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x865900 is same with the state(5) to be set 00:15:10.606 [2024-07-15 09:48:27.198908] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x865900 is same with the state(5) to be set 00:15:10.606 [2024-07-15 09:48:27.198933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x865900 is same with the state(5) to be set 00:15:10.606 [2024-07-15 09:48:27.198958] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x865900 is same with the state(5) to be set 00:15:10.606 [2024-07-15 09:48:27.198971] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x865900 is same with the state(5) to be set 00:15:10.606 [2024-07-15 09:48:27.198983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x865900 is same with the state(5) to be set 00:15:10.606 [2024-07-15 09:48:27.198995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x865900 is same with the state(5) to be set 00:15:10.606 [2024-07-15 09:48:27.199006] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x865900 is same with the state(5) to be set 00:15:10.606 [2024-07-15 09:48:27.199018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x865900 is same with the state(5) to be set 00:15:10.606 [2024-07-15 09:48:27.199030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x865900 is same with the state(5) to be set 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:10.606 [2024-07-15 09:48:27.210672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.606 [2024-07-15 09:48:27.210720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.210748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.606 [2024-07-15 09:48:27.210772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.210790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.606 [2024-07-15 09:48:27.210805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.210820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.606 [2024-07-15 09:48:27.210834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.210849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d0b50 is same with the state(5) to be set 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.606 09:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:15:10.606 [2024-07-15 09:48:27.211209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.211270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.211311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.211346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.211386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.211417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.211448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:10.606 [2024-07-15 09:48:27.211479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.211512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.211544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.211575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.211609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.211641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.211672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.211704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.211751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.211799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:10.606 [2024-07-15 09:48:27.211830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.211896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.211930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.211962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.211978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.211996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.212012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.212029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.606 [2024-07-15 09:48:27.212045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.606 [2024-07-15 09:48:27.212062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 
[2024-07-15 09:48:27.212208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 
09:48:27.212536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 
09:48:27.212893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.212977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.212994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.213009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.213025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.213040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.213057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.213072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.213092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.213108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.213125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.213141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.213158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.213190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.213207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.213222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 
09:48:27.213239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.213254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.213270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.213285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.213301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.213315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.213331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.213346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.213362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.213377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.213393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.213408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.213424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.607 [2024-07-15 09:48:27.213438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.607 [2024-07-15 09:48:27.213517] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x27e1e10 was disconnected and freed. reset controller. 
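[Editor's note] The long run of ABORTED - SQ DELETION completions above is the point of the test, not collateral damage: @84 revokes this host's access to the subsystem while bdevperf is mid-verify, so the target tears down the queue pair and every in-flight WRITE completes aborted; @85 re-authorizes the host, and the initiator's bdev_nvme layer is expected to reset the controller and reconnect unaided (the "Resetting controller successful" line below confirms it did). Stripped of xtrace noise, the fault injection is two RPCs and a grace period:

# Revoke the host: the target drops the connection and queued I/O aborts.
./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 \
    nqn.2016-06.io.spdk:host0
# Restore access: bdev_nvme should reset the controller and resume on its own.
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
    nqn.2016-06.io.spdk:host0
sleep 1    # @87: give the initiator a moment to reconnect before checking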
00:15:10.607 [2024-07-15 09:48:27.214650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:15:10.607 task offset: 73728 on job bdev=Nvme0n1 fails
00:15:10.607
00:15:10.607 Latency(us)
00:15:10.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:10.608 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:10.608 Job: Nvme0n1 ended in about 0.41 seconds with error
00:15:10.608 Verification LBA range: start 0x0 length 0x400
00:15:10.608 Nvme0n1 : 0.41 1396.23 87.26 155.14 0.00 40128.46 3252.53 36894.34
00:15:10.608 ===================================================================================================================
00:15:10.608 Total : 1396.23 87.26 155.14 0.00 40128.46 3252.53 36894.34
00:15:10.608 [2024-07-15 09:48:27.216517] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:15:10.608 [2024-07-15 09:48:27.216557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d0b50 (9): Bad file descriptor
00:15:10.608 [2024-07-15 09:48:27.223839] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:15:11.544 09:48:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1872913
00:15:11.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1872913) - No such process
00:15:11.544 09:48:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true
00:15:11.544 09:48:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:15:11.544 09:48:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:15:11.544 09:48:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:15:11.544 09:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:15:11.544 09:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:15:11.544 09:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:15:11.544 09:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:15:11.544 {
00:15:11.544 "params": {
00:15:11.544 "name": "Nvme$subsystem",
00:15:11.544 "trtype": "$TEST_TRANSPORT",
00:15:11.544 "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:11.544 "adrfam": "ipv4",
00:15:11.544 "trsvcid": "$NVMF_PORT",
00:15:11.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:11.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:11.544 "hdgst": ${hdgst:-false},
00:15:11.544 "ddgst": ${ddgst:-false}
00:15:11.544 },
00:15:11.544 "method": "bdev_nvme_attach_controller"
00:15:11.544 }
00:15:11.544 EOF
00:15:11.544 )")
00:15:11.544 09:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:15:11.544 09:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
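[Editor's note] gen_nvmf_target_json, expanded twice now in this log, is the only bridge between the shell harness and bdevperf: one heredoc stanza per subsystem, the stanzas joined with IFS=',', and the result validated by jq and handed over as an anonymous file, which is why bdevperf's command line reads --json /dev/fd/62. A condensed, runnable sketch of the same idea; the outer "subsystems" wrapper is an assumption about the final document shape, not quoted from common.sh:

gen_json() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    # jq both pretty-prints and rejects malformed JSON before bdevperf sees it.
    jq . <<< "{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${config[*]}]}]}"
}
# bdevperf --json <(gen_json 0) ...   # the <(...) is what shows up as /dev/fd/62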
00:15:11.544 09:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:11.544 09:48:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:11.544 "params": { 00:15:11.544 "name": "Nvme0", 00:15:11.544 "trtype": "tcp", 00:15:11.544 "traddr": "10.0.0.2", 00:15:11.544 "adrfam": "ipv4", 00:15:11.544 "trsvcid": "4420", 00:15:11.544 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:11.544 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:11.544 "hdgst": false, 00:15:11.544 "ddgst": false 00:15:11.544 }, 00:15:11.544 "method": "bdev_nvme_attach_controller" 00:15:11.544 }' 00:15:11.544 [2024-07-15 09:48:28.259083] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:11.544 [2024-07-15 09:48:28.259155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1873185 ] 00:15:11.544 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.544 [2024-07-15 09:48:28.291434] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:11.544 [2024-07-15 09:48:28.320325] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.820 [2024-07-15 09:48:28.407102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.820 Running I/O for 1 seconds... 00:15:13.197 00:15:13.197 Latency(us) 00:15:13.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.197 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:13.197 Verification LBA range: start 0x0 length 0x400 00:15:13.197 Nvme0n1 : 1.02 1449.18 90.57 0.00 0.00 43490.91 9854.67 38836.15 00:15:13.197 =================================================================================================================== 00:15:13.197 Total : 1449.18 90.57 0.00 0.00 43490.91 9854.67 38836.15 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:13.197 rmmod nvme_tcp 00:15:13.197 rmmod nvme_fabrics 00:15:13.197 rmmod nvme_keyring 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:13.197 09:48:29 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1872859 ']' 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1872859 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1872859 ']' 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1872859 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1872859 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1872859' 00:15:13.197 killing process with pid 1872859 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1872859 00:15:13.197 09:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1872859 00:15:13.455 [2024-07-15 09:48:30.145226] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:13.455 09:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:13.455 09:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:13.455 09:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:13.455 09:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:13.455 09:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:13.455 09:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.455 09:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.455 09:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.993 09:48:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:15.993 09:48:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:15:15.993 00:15:15.993 real 0m8.577s 00:15:15.993 user 0m19.393s 00:15:15.993 sys 0m2.596s 00:15:15.993 09:48:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:15.993 09:48:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:15.993 ************************************ 00:15:15.993 END TEST nvmf_host_management 00:15:15.993 ************************************ 00:15:15.993 09:48:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:15.993 09:48:32 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:15.993 09:48:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:15.993 09:48:32 nvmf_tcp -- common/autotest_common.sh@1105 -- 
# xtrace_disable 00:15:15.993 09:48:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:15.993 ************************************ 00:15:15.993 START TEST nvmf_lvol 00:15:15.993 ************************************ 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:15.993 * Looking for test storage... 00:15:15.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.993 09:48:32 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:15:15.994 09:48:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:17.894 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:17.894 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:15:17.894 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:17.894 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:17.894 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:17.894 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:17.894 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:17.894 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:15:17.894 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:17.894 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:15:17.894 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:15:17.894 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:15:17.894 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:17.895 09:48:34 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:17.895 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:17.895 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:17.895 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:17.895 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:17.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:15:17.895 00:15:17.895 --- 10.0.0.2 ping statistics --- 00:15:17.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.895 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:17.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:17.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:15:17.895 00:15:17.895 --- 10.0.0.1 ping statistics --- 00:15:17.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.895 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1875282 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1875282 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1875282 ']' 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.895 09:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:17.895 [2024-07-15 09:48:34.532957] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:17.895 [2024-07-15 09:48:34.533054] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.895 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.895 [2024-07-15 09:48:34.571809] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:17.895 [2024-07-15 09:48:34.598589] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:18.153 [2024-07-15 09:48:34.687142] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:18.153 [2024-07-15 09:48:34.687200] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.153 [2024-07-15 09:48:34.687228] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.153 [2024-07-15 09:48:34.687240] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.153 [2024-07-15 09:48:34.687249] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.153 [2024-07-15 09:48:34.687301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.153 [2024-07-15 09:48:34.690895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.153 [2024-07-15 09:48:34.690906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.153 09:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:18.153 09:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:15:18.153 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:18.153 09:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:18.153 09:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:18.153 09:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.153 09:48:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:18.410 [2024-07-15 09:48:35.052807] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.410 09:48:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:18.667 09:48:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:18.668 09:48:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:18.925 09:48:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:18.925 09:48:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:19.184 09:48:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:19.443 09:48:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5d9552a4-87ee-4e9e-9f04-43914db05641 00:15:19.443 09:48:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5d9552a4-87ee-4e9e-9f04-43914db05641 lvol 20 00:15:19.701 09:48:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e8a44691-b1c9-4b21-9418-add347bf395f 00:15:19.701 09:48:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:19.958 09:48:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e8a44691-b1c9-4b21-9418-add347bf395f 00:15:20.216 09:48:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:20.474 [2024-07-15 09:48:37.124976] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.474 09:48:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:20.732 09:48:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1875702 00:15:20.732 09:48:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:20.732 09:48:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:20.732 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.667 09:48:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e8a44691-b1c9-4b21-9418-add347bf395f MY_SNAPSHOT 00:15:21.925 09:48:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3b42c129-7e91-42bd-89c1-bc3185e8e170 00:15:21.925 09:48:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e8a44691-b1c9-4b21-9418-add347bf395f 30 00:15:22.522 09:48:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3b42c129-7e91-42bd-89c1-bc3185e8e170 MY_CLONE 00:15:22.522 09:48:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=82a68feb-d5d1-4af0-8a71-bcb44ea54435 00:15:22.522 09:48:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 82a68feb-d5d1-4af0-8a71-bcb44ea54435 00:15:23.457 09:48:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1875702 00:15:31.571 Initializing NVMe Controllers 00:15:31.571 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:31.571 Controller IO queue size 128, less than required. 00:15:31.571 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:31.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:31.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:31.571 Initialization complete. Launching workers. 
00:15:31.571 ======================================================== 00:15:31.571 Latency(us) 00:15:31.571 Device Information : IOPS MiB/s Average min max 00:15:31.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10766.40 42.06 11891.86 1750.24 77988.32 00:15:31.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10598.40 41.40 12082.88 2305.21 72634.54 00:15:31.571 ======================================================== 00:15:31.571 Total : 21364.80 83.46 11986.62 1750.24 77988.32 00:15:31.571 00:15:31.571 09:48:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:31.571 09:48:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e8a44691-b1c9-4b21-9418-add347bf395f 00:15:31.571 09:48:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5d9552a4-87ee-4e9e-9f04-43914db05641 00:15:31.830 09:48:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:31.830 09:48:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:31.830 09:48:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:31.830 09:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:31.830 09:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:15:31.830 09:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:31.830 09:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:15:31.830 09:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:31.830 09:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:31.830 rmmod nvme_tcp 00:15:32.135 rmmod nvme_fabrics 00:15:32.135 rmmod nvme_keyring 00:15:32.135 09:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:32.135 09:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:15:32.135 09:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:15:32.135 09:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1875282 ']' 00:15:32.135 09:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1875282 00:15:32.135 09:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1875282 ']' 00:15:32.135 09:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1875282 00:15:32.135 09:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:15:32.135 09:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:32.135 09:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1875282 00:15:32.135 09:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:32.135 09:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:32.135 09:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1875282' 00:15:32.135 killing process with pid 1875282 00:15:32.135 09:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1875282 00:15:32.135 09:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1875282 00:15:32.395 09:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:32.395 
09:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:32.395 09:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:32.395 09:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:32.395 09:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:32.395 09:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.395 09:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.395 09:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.303 09:48:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:34.303 00:15:34.303 real 0m18.718s 00:15:34.303 user 1m3.756s 00:15:34.303 sys 0m5.729s 00:15:34.303 09:48:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:34.303 09:48:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:34.303 ************************************ 00:15:34.303 END TEST nvmf_lvol 00:15:34.303 ************************************ 00:15:34.303 09:48:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:34.303 09:48:51 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:34.303 09:48:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:34.303 09:48:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:34.303 09:48:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:34.303 ************************************ 00:15:34.303 START TEST nvmf_lvs_grow 00:15:34.303 ************************************ 00:15:34.303 09:48:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:34.303 * Looking for test storage... 
00:15:34.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:34.562 09:48:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.562 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:15:34.562 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.562 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.562 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.562 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:15:34.563 09:48:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.465 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:36.465 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:36.466 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:36.466 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:36.466 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:36.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:15:36.466 00:15:36.466 --- 10.0.0.2 ping statistics --- 00:15:36.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.466 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:36.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:15:36.466 00:15:36.466 --- 10.0.0.1 ping statistics --- 00:15:36.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.466 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1878958 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1878958 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1878958 ']' 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:36.466 09:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:36.466 [2024-07-15 09:48:53.236348] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:36.466 [2024-07-15 09:48:53.236429] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.724 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.724 [2024-07-15 09:48:53.273122] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
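As in the previous test, nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until the target's JSON-RPC socket answers. Run from the SPDK checkout, the equivalent is roughly the sketch below; the polling loop is a simplified stand-in for waitforlisten, and /var/tmp/spdk.sock is the default RPC socket:

# -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask, -m 0x1: core 0 only
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# poll the RPC socket until the target answers, bailing out if it died
until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || exit 1
    sleep 0.5
done
# the TCP transport is then created exactly as logged below
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192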
00:15:36.724 [2024-07-15 09:48:53.304310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.724 [2024-07-15 09:48:53.395978] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.724 [2024-07-15 09:48:53.396031] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.724 [2024-07-15 09:48:53.396047] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.724 [2024-07-15 09:48:53.396061] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.724 [2024-07-15 09:48:53.396073] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.724 [2024-07-15 09:48:53.396115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.981 09:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.981 09:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:15:36.981 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:36.981 09:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:36.981 09:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:36.981 09:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.981 09:48:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:37.239 [2024-07-15 09:48:53.773723] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.239 09:48:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:15:37.239 09:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:37.239 09:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:37.239 09:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:37.239 ************************************ 00:15:37.239 START TEST lvs_grow_clean 00:15:37.239 ************************************ 00:15:37.239 09:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:15:37.239 09:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:37.239 09:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:37.239 09:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:37.239 09:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:37.239 09:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:37.239 09:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:37.239 09:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:37.239 09:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:37.239 09:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:37.498 09:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:37.498 09:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:37.757 09:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6a42b902-fc54-403e-be8b-da26a7eee5b3 00:15:37.757 09:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a42b902-fc54-403e-be8b-da26a7eee5b3 00:15:37.757 09:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:38.015 09:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:38.016 09:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:38.016 09:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6a42b902-fc54-403e-be8b-da26a7eee5b3 lvol 150 00:15:38.273 09:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c6c7166a-94b5-4d12-bc23-973257f1ff93 00:15:38.273 09:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:38.273 09:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:38.530 [2024-07-15 09:48:55.190290] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:38.530 [2024-07-15 09:48:55.190379] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:38.530 true 00:15:38.530 09:48:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a42b902-fc54-403e-be8b-da26a7eee5b3 00:15:38.530 09:48:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:38.789 09:48:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:38.789 09:48:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:39.048 09:48:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c6c7166a-94b5-4d12-bc23-973257f1ff93 00:15:39.308 09:48:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:39.565 [2024-07-15 09:48:56.197333] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:39.565 09:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:39.822 09:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1879400 00:15:39.822 09:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:39.822 09:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1879400 /var/tmp/bdevperf.sock 00:15:39.822 09:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1879400 ']' 00:15:39.822 09:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:39.822 09:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:39.822 09:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:39.822 09:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:39.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:39.822 09:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:39.822 09:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:39.822 [2024-07-15 09:48:56.508818] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:39.822 [2024-07-15 09:48:56.508907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1879400 ] 00:15:39.822 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.822 [2024-07-15 09:48:56.543701] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
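Because bdevperf is started with -z, it comes up idle and waits on its private RPC socket (/var/tmp/bdevperf.sock) until a bdev is configured; only then does perform_tests start the workload, as the next lines show. Condensed from this run, with the same addresses, NQN, and queue settings:

# start bdevperf paused (-z): 4 KiB randwrite, queue depth 128, 10 s, core 1
./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
    -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
# attach the exported lvol over NVMe/TCP as bdev Nvme0 (namespace 1 -> Nvme0n1)
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
# kick off the preconfigured workload and wait for the results
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests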
00:15:39.822 [2024-07-15 09:48:56.570805] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.079 [2024-07-15 09:48:56.654365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.079 09:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:40.079 09:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:15:40.079 09:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:40.336 Nvme0n1 00:15:40.336 09:48:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:40.594 [ 00:15:40.594 { 00:15:40.594 "name": "Nvme0n1", 00:15:40.594 "aliases": [ 00:15:40.594 "c6c7166a-94b5-4d12-bc23-973257f1ff93" 00:15:40.594 ], 00:15:40.594 "product_name": "NVMe disk", 00:15:40.594 "block_size": 4096, 00:15:40.594 "num_blocks": 38912, 00:15:40.594 "uuid": "c6c7166a-94b5-4d12-bc23-973257f1ff93", 00:15:40.594 "assigned_rate_limits": { 00:15:40.594 "rw_ios_per_sec": 0, 00:15:40.594 "rw_mbytes_per_sec": 0, 00:15:40.594 "r_mbytes_per_sec": 0, 00:15:40.594 "w_mbytes_per_sec": 0 00:15:40.594 }, 00:15:40.594 "claimed": false, 00:15:40.594 "zoned": false, 00:15:40.594 "supported_io_types": { 00:15:40.594 "read": true, 00:15:40.594 "write": true, 00:15:40.594 "unmap": true, 00:15:40.594 "flush": true, 00:15:40.594 "reset": true, 00:15:40.594 "nvme_admin": true, 00:15:40.594 "nvme_io": true, 00:15:40.594 "nvme_io_md": false, 00:15:40.594 "write_zeroes": true, 00:15:40.594 "zcopy": false, 00:15:40.594 "get_zone_info": false, 00:15:40.594 "zone_management": false, 00:15:40.594 "zone_append": false, 00:15:40.594 "compare": true, 00:15:40.594 "compare_and_write": true, 00:15:40.594 "abort": true, 00:15:40.594 "seek_hole": false, 00:15:40.594 "seek_data": false, 00:15:40.594 "copy": true, 00:15:40.594 "nvme_iov_md": false 00:15:40.594 }, 00:15:40.594 "memory_domains": [ 00:15:40.594 { 00:15:40.594 "dma_device_id": "system", 00:15:40.594 "dma_device_type": 1 00:15:40.594 } 00:15:40.594 ], 00:15:40.594 "driver_specific": { 00:15:40.594 "nvme": [ 00:15:40.594 { 00:15:40.594 "trid": { 00:15:40.594 "trtype": "TCP", 00:15:40.594 "adrfam": "IPv4", 00:15:40.594 "traddr": "10.0.0.2", 00:15:40.594 "trsvcid": "4420", 00:15:40.594 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:40.594 }, 00:15:40.594 "ctrlr_data": { 00:15:40.594 "cntlid": 1, 00:15:40.594 "vendor_id": "0x8086", 00:15:40.594 "model_number": "SPDK bdev Controller", 00:15:40.594 "serial_number": "SPDK0", 00:15:40.594 "firmware_revision": "24.09", 00:15:40.594 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:40.594 "oacs": { 00:15:40.594 "security": 0, 00:15:40.594 "format": 0, 00:15:40.594 "firmware": 0, 00:15:40.594 "ns_manage": 0 00:15:40.594 }, 00:15:40.594 "multi_ctrlr": true, 00:15:40.594 "ana_reporting": false 00:15:40.594 }, 00:15:40.594 "vs": { 00:15:40.594 "nvme_version": "1.3" 00:15:40.594 }, 00:15:40.594 "ns_data": { 00:15:40.594 "id": 1, 00:15:40.594 "can_share": true 00:15:40.594 } 00:15:40.594 } 00:15:40.594 ], 00:15:40.594 "mp_policy": "active_passive" 00:15:40.594 } 00:15:40.594 } 00:15:40.594 ] 00:15:40.594 09:48:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1879413 00:15:40.594 09:48:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:40.594 09:48:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:40.853 Running I/O for 10 seconds... 00:15:41.791 Latency(us) 00:15:41.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:41.791 Nvme0n1 : 1.00 15225.00 59.47 0.00 0.00 0.00 0.00 0.00 00:15:41.791 =================================================================================================================== 00:15:41.791 Total : 15225.00 59.47 0.00 0.00 0.00 0.00 0.00 00:15:41.791 00:15:42.729 09:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6a42b902-fc54-403e-be8b-da26a7eee5b3 00:15:42.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:42.729 Nvme0n1 : 2.00 15122.50 59.07 0.00 0.00 0.00 0.00 0.00 00:15:42.729 =================================================================================================================== 00:15:42.729 Total : 15122.50 59.07 0.00 0.00 0.00 0.00 0.00 00:15:42.729 00:15:42.986 true 00:15:42.986 09:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a42b902-fc54-403e-be8b-da26a7eee5b3 00:15:42.986 09:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:43.246 09:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:43.246 09:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:43.246 09:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1879413 00:15:43.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:43.815 Nvme0n1 : 3.00 15228.33 59.49 0.00 0.00 0.00 0.00 0.00 00:15:43.816 =================================================================================================================== 00:15:43.816 Total : 15228.33 59.49 0.00 0.00 0.00 0.00 0.00 00:15:43.816 00:15:44.754 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:44.754 Nvme0n1 : 4.00 15263.25 59.62 0.00 0.00 0.00 0.00 0.00 00:15:44.754 =================================================================================================================== 00:15:44.754 Total : 15263.25 59.62 0.00 0.00 0.00 0.00 0.00 00:15:44.755 00:15:45.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:45.695 Nvme0n1 : 5.00 15313.60 59.82 0.00 0.00 0.00 0.00 0.00 00:15:45.695 =================================================================================================================== 00:15:45.695 Total : 15313.60 59.82 0.00 0.00 0.00 0.00 0.00 00:15:45.695 00:15:47.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:47.093 Nvme0n1 : 6.00 15380.83 60.08 0.00 0.00 0.00 0.00 0.00 00:15:47.093 =================================================================================================================== 
00:15:47.093 Total : 15380.83 60.08 0.00 0.00 0.00 0.00 0.00 00:15:47.093 00:15:48.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:48.040 Nvme0n1 : 7.00 15379.29 60.08 0.00 0.00 0.00 0.00 0.00 00:15:48.040 =================================================================================================================== 00:15:48.040 Total : 15379.29 60.08 0.00 0.00 0.00 0.00 0.00 00:15:48.040 00:15:48.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:48.975 Nvme0n1 : 8.00 15365.88 60.02 0.00 0.00 0.00 0.00 0.00 00:15:48.975 =================================================================================================================== 00:15:48.975 Total : 15365.88 60.02 0.00 0.00 0.00 0.00 0.00 00:15:48.975 00:15:49.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:49.909 Nvme0n1 : 9.00 15381.56 60.08 0.00 0.00 0.00 0.00 0.00 00:15:49.909 =================================================================================================================== 00:15:49.909 Total : 15381.56 60.08 0.00 0.00 0.00 0.00 0.00 00:15:49.909 00:15:50.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:50.843 Nvme0n1 : 10.00 15380.50 60.08 0.00 0.00 0.00 0.00 0.00 00:15:50.843 =================================================================================================================== 00:15:50.843 Total : 15380.50 60.08 0.00 0.00 0.00 0.00 0.00 00:15:50.843 00:15:50.843 00:15:50.843 Latency(us) 00:15:50.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:50.843 Nvme0n1 : 10.00 15379.59 60.08 0.00 0.00 8317.51 2269.49 16796.63 00:15:50.843 =================================================================================================================== 00:15:50.843 Total : 15379.59 60.08 0.00 0.00 8317.51 2269.49 16796.63 00:15:50.843 0 00:15:50.843 09:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1879400 00:15:50.843 09:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1879400 ']' 00:15:50.843 09:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1879400 00:15:50.843 09:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:15:50.843 09:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:50.843 09:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1879400 00:15:50.843 09:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:50.843 09:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:50.843 09:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1879400' 00:15:50.843 killing process with pid 1879400 00:15:50.843 09:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1879400 00:15:50.843 Received shutdown signal, test time was about 10.000000 seconds 00:15:50.843 00:15:50.843 Latency(us) 00:15:50.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.843 
=================================================================================================================== 00:15:50.843 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:50.843 09:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1879400 00:15:51.102 09:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:51.359 09:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:51.617 09:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a42b902-fc54-403e-be8b-da26a7eee5b3 00:15:51.617 09:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:51.875 09:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:51.875 09:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:51.875 09:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:52.133 [2024-07-15 09:49:08.732642] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:52.133 09:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a42b902-fc54-403e-be8b-da26a7eee5b3 00:15:52.133 09:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:15:52.133 09:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a42b902-fc54-403e-be8b-da26a7eee5b3 00:15:52.133 09:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:52.133 09:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:52.133 09:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:52.133 09:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:52.133 09:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:52.133 09:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:52.133 09:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:52.133 09:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:52.133 09:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a42b902-fc54-403e-be8b-da26a7eee5b3 00:15:52.391 request: 00:15:52.391 { 00:15:52.391 "uuid": "6a42b902-fc54-403e-be8b-da26a7eee5b3", 00:15:52.391 "method": "bdev_lvol_get_lvstores", 00:15:52.391 "req_id": 1 00:15:52.391 } 00:15:52.391 Got JSON-RPC error response 00:15:52.391 response: 00:15:52.391 { 00:15:52.391 "code": -19, 00:15:52.391 "message": "No such device" 00:15:52.391 } 00:15:52.391 09:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:15:52.391 09:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:52.391 09:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:52.391 09:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:52.391 09:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:52.650 aio_bdev 00:15:52.650 09:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c6c7166a-94b5-4d12-bc23-973257f1ff93 00:15:52.650 09:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=c6c7166a-94b5-4d12-bc23-973257f1ff93 00:15:52.650 09:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:52.650 09:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:15:52.650 09:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:52.650 09:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:52.650 09:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:52.908 09:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c6c7166a-94b5-4d12-bc23-973257f1ff93 -t 2000 00:15:53.166 [ 00:15:53.166 { 00:15:53.166 "name": "c6c7166a-94b5-4d12-bc23-973257f1ff93", 00:15:53.166 "aliases": [ 00:15:53.166 "lvs/lvol" 00:15:53.166 ], 00:15:53.166 "product_name": "Logical Volume", 00:15:53.166 "block_size": 4096, 00:15:53.166 "num_blocks": 38912, 00:15:53.166 "uuid": "c6c7166a-94b5-4d12-bc23-973257f1ff93", 00:15:53.166 "assigned_rate_limits": { 00:15:53.166 "rw_ios_per_sec": 0, 00:15:53.166 "rw_mbytes_per_sec": 0, 00:15:53.166 "r_mbytes_per_sec": 0, 00:15:53.166 "w_mbytes_per_sec": 0 00:15:53.166 }, 00:15:53.166 "claimed": false, 00:15:53.166 "zoned": false, 00:15:53.166 "supported_io_types": { 00:15:53.166 "read": true, 00:15:53.166 "write": true, 00:15:53.166 "unmap": true, 00:15:53.166 "flush": false, 00:15:53.166 "reset": true, 00:15:53.166 "nvme_admin": false, 00:15:53.166 "nvme_io": false, 00:15:53.166 "nvme_io_md": false, 00:15:53.166 "write_zeroes": true, 00:15:53.166 "zcopy": false, 00:15:53.166 "get_zone_info": false, 00:15:53.166 "zone_management": false, 00:15:53.166 "zone_append": false, 00:15:53.166 "compare": false, 00:15:53.166 "compare_and_write": false, 00:15:53.166 "abort": false, 00:15:53.166 "seek_hole": true, 00:15:53.166 
"seek_data": true, 00:15:53.166 "copy": false, 00:15:53.166 "nvme_iov_md": false 00:15:53.166 }, 00:15:53.166 "driver_specific": { 00:15:53.166 "lvol": { 00:15:53.166 "lvol_store_uuid": "6a42b902-fc54-403e-be8b-da26a7eee5b3", 00:15:53.166 "base_bdev": "aio_bdev", 00:15:53.166 "thin_provision": false, 00:15:53.166 "num_allocated_clusters": 38, 00:15:53.166 "snapshot": false, 00:15:53.166 "clone": false, 00:15:53.166 "esnap_clone": false 00:15:53.166 } 00:15:53.166 } 00:15:53.166 } 00:15:53.166 ] 00:15:53.166 09:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:15:53.166 09:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a42b902-fc54-403e-be8b-da26a7eee5b3 00:15:53.166 09:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:53.424 09:49:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:53.425 09:49:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a42b902-fc54-403e-be8b-da26a7eee5b3 00:15:53.425 09:49:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:53.683 09:49:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:53.683 09:49:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c6c7166a-94b5-4d12-bc23-973257f1ff93 00:15:53.941 09:49:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6a42b902-fc54-403e-be8b-da26a7eee5b3 00:15:54.199 09:49:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:54.458 09:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:54.458 00:15:54.458 real 0m17.347s 00:15:54.458 user 0m16.772s 00:15:54.458 sys 0m1.895s 00:15:54.458 09:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:54.458 09:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:54.458 ************************************ 00:15:54.458 END TEST lvs_grow_clean 00:15:54.458 ************************************ 00:15:54.458 09:49:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:54.458 09:49:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:54.458 09:49:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:54.458 09:49:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:54.458 09:49:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:54.458 ************************************ 00:15:54.458 START TEST lvs_grow_dirty 00:15:54.458 ************************************ 00:15:54.458 09:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 
00:15:54.458 09:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:54.458 09:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:54.458 09:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:54.458 09:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:54.458 09:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:54.458 09:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:54.458 09:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:54.458 09:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:54.458 09:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:55.025 09:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:55.025 09:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:55.025 09:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7187c063-6891-440e-9441-8ff133287470 00:15:55.025 09:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7187c063-6891-440e-9441-8ff133287470 00:15:55.025 09:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:55.591 09:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:55.591 09:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:55.591 09:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7187c063-6891-440e-9441-8ff133287470 lvol 150 00:15:55.591 09:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8b81b1ac-dd42-4553-8527-9d58c21d8e86 00:15:55.591 09:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:55.591 09:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:56.156 [2024-07-15 09:49:12.636442] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:56.156 [2024-07-15 09:49:12.636529] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev 
event: type 1 00:15:56.156 true 00:15:56.156 09:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7187c063-6891-440e-9441-8ff133287470 00:15:56.156 09:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:56.156 09:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:56.156 09:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:56.441 09:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8b81b1ac-dd42-4553-8527-9d58c21d8e86 00:15:56.698 09:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:56.955 [2024-07-15 09:49:13.655578] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.955 09:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:57.212 09:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1881463 00:15:57.212 09:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:57.212 09:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1881463 /var/tmp/bdevperf.sock 00:15:57.212 09:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:57.212 09:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1881463 ']' 00:15:57.212 09:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:57.212 09:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:57.212 09:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:57.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:57.212 09:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:57.212 09:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:57.212 [2024-07-15 09:49:13.993623] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
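Note: the dirty-path setup traced in the lines above condenses to a short RPC sequence: build a 200 MiB file-backed aio bdev, put an lvstore and a 150 MiB lvol on it, grow the file to 400 MiB and rescan, then export the lvol over NVMe/TCP. A sketch using the UUIDs from this run (rpc is shorthand for scripts/rpc.py; $TESTDIR stands for test/nvmf/target):

  truncate -s 200M $TESTDIR/aio_bdev
  rpc bdev_aio_create $TESTDIR/aio_bdev aio_bdev 4096
  rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  rpc bdev_lvol_create -u 7187c063-6891-440e-9441-8ff133287470 lvol 150
  truncate -s 400M $TESTDIR/aio_bdev   # grow the backing file...
  rpc bdev_aio_rescan aio_bdev         # ...and let the aio bdev pick up the new size
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8b81b1ac-dd42-4553-8527-9d58c21d8e86
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The lvstore itself is only grown mid-run (bdev_lvol_grow_lvstore at second 2 of the I/O loop below); the rescan merely enlarges the underlying bdev.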
00:15:57.212 [2024-07-15 09:49:13.993716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1881463 ] 00:15:57.469 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.470 [2024-07-15 09:49:14.026392] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:57.470 [2024-07-15 09:49:14.054675] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.470 [2024-07-15 09:49:14.140355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.470 09:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:57.470 09:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:57.470 09:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:58.035 Nvme0n1 00:15:58.035 09:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:58.293 [ 00:15:58.293 { 00:15:58.293 "name": "Nvme0n1", 00:15:58.293 "aliases": [ 00:15:58.293 "8b81b1ac-dd42-4553-8527-9d58c21d8e86" 00:15:58.293 ], 00:15:58.293 "product_name": "NVMe disk", 00:15:58.293 "block_size": 4096, 00:15:58.293 "num_blocks": 38912, 00:15:58.293 "uuid": "8b81b1ac-dd42-4553-8527-9d58c21d8e86", 00:15:58.293 "assigned_rate_limits": { 00:15:58.293 "rw_ios_per_sec": 0, 00:15:58.293 "rw_mbytes_per_sec": 0, 00:15:58.293 "r_mbytes_per_sec": 0, 00:15:58.293 "w_mbytes_per_sec": 0 00:15:58.293 }, 00:15:58.293 "claimed": false, 00:15:58.293 "zoned": false, 00:15:58.293 "supported_io_types": { 00:15:58.293 "read": true, 00:15:58.293 "write": true, 00:15:58.293 "unmap": true, 00:15:58.293 "flush": true, 00:15:58.293 "reset": true, 00:15:58.293 "nvme_admin": true, 00:15:58.293 "nvme_io": true, 00:15:58.293 "nvme_io_md": false, 00:15:58.293 "write_zeroes": true, 00:15:58.293 "zcopy": false, 00:15:58.293 "get_zone_info": false, 00:15:58.293 "zone_management": false, 00:15:58.293 "zone_append": false, 00:15:58.293 "compare": true, 00:15:58.293 "compare_and_write": true, 00:15:58.293 "abort": true, 00:15:58.293 "seek_hole": false, 00:15:58.293 "seek_data": false, 00:15:58.293 "copy": true, 00:15:58.293 "nvme_iov_md": false 00:15:58.293 }, 00:15:58.293 "memory_domains": [ 00:15:58.293 { 00:15:58.293 "dma_device_id": "system", 00:15:58.293 "dma_device_type": 1 00:15:58.293 } 00:15:58.293 ], 00:15:58.293 "driver_specific": { 00:15:58.293 "nvme": [ 00:15:58.293 { 00:15:58.293 "trid": { 00:15:58.293 "trtype": "TCP", 00:15:58.293 "adrfam": "IPv4", 00:15:58.293 "traddr": "10.0.0.2", 00:15:58.293 "trsvcid": "4420", 00:15:58.293 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:58.293 }, 00:15:58.293 "ctrlr_data": { 00:15:58.293 "cntlid": 1, 00:15:58.293 "vendor_id": "0x8086", 00:15:58.293 "model_number": "SPDK bdev Controller", 00:15:58.293 "serial_number": "SPDK0", 00:15:58.293 "firmware_revision": "24.09", 00:15:58.293 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:58.293 "oacs": { 00:15:58.293 "security": 0, 
00:15:58.293 "format": 0, 00:15:58.293 "firmware": 0, 00:15:58.293 "ns_manage": 0 00:15:58.293 }, 00:15:58.293 "multi_ctrlr": true, 00:15:58.293 "ana_reporting": false 00:15:58.293 }, 00:15:58.293 "vs": { 00:15:58.293 "nvme_version": "1.3" 00:15:58.293 }, 00:15:58.293 "ns_data": { 00:15:58.293 "id": 1, 00:15:58.293 "can_share": true 00:15:58.293 } 00:15:58.293 } 00:15:58.293 ], 00:15:58.293 "mp_policy": "active_passive" 00:15:58.293 } 00:15:58.293 } 00:15:58.293 ] 00:15:58.293 09:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1881598 00:15:58.293 09:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:58.293 09:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:58.293 Running I/O for 10 seconds... 00:15:59.667 Latency(us) 00:15:59.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:59.667 Nvme0n1 : 1.00 14293.00 55.83 0.00 0.00 0.00 0.00 0.00 00:15:59.667 =================================================================================================================== 00:15:59.667 Total : 14293.00 55.83 0.00 0.00 0.00 0.00 0.00 00:15:59.667 00:16:00.233 09:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7187c063-6891-440e-9441-8ff133287470 00:16:00.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:00.491 Nvme0n1 : 2.00 14513.50 56.69 0.00 0.00 0.00 0.00 0.00 00:16:00.491 =================================================================================================================== 00:16:00.491 Total : 14513.50 56.69 0.00 0.00 0.00 0.00 0.00 00:16:00.491 00:16:00.491 true 00:16:00.491 09:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7187c063-6891-440e-9441-8ff133287470 00:16:00.491 09:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:00.752 09:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:00.753 09:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:00.753 09:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1881598 00:16:01.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:01.347 Nvme0n1 : 3.00 14799.00 57.81 0.00 0.00 0.00 0.00 0.00 00:16:01.347 =================================================================================================================== 00:16:01.347 Total : 14799.00 57.81 0.00 0.00 0.00 0.00 0.00 00:16:01.347 00:16:02.279 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:02.280 Nvme0n1 : 4.00 14816.25 57.88 0.00 0.00 0.00 0.00 0.00 00:16:02.280 =================================================================================================================== 00:16:02.280 Total : 14816.25 57.88 0.00 0.00 0.00 0.00 0.00 00:16:02.280 00:16:03.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:03.653 Nvme0n1 : 5.00 
14851.60 58.01 0.00 0.00 0.00 0.00 0.00 00:16:03.653 =================================================================================================================== 00:16:03.653 Total : 14851.60 58.01 0.00 0.00 0.00 0.00 0.00 00:16:03.653 00:16:04.586 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:04.586 Nvme0n1 : 6.00 14970.00 58.48 0.00 0.00 0.00 0.00 0.00 00:16:04.586 =================================================================================================================== 00:16:04.586 Total : 14970.00 58.48 0.00 0.00 0.00 0.00 0.00 00:16:04.586 00:16:05.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:05.518 Nvme0n1 : 7.00 14990.86 58.56 0.00 0.00 0.00 0.00 0.00 00:16:05.518 =================================================================================================================== 00:16:05.518 Total : 14990.86 58.56 0.00 0.00 0.00 0.00 0.00 00:16:05.518 00:16:06.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:06.450 Nvme0n1 : 8.00 15070.38 58.87 0.00 0.00 0.00 0.00 0.00 00:16:06.450 =================================================================================================================== 00:16:06.450 Total : 15070.38 58.87 0.00 0.00 0.00 0.00 0.00 00:16:06.450 00:16:07.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:07.388 Nvme0n1 : 9.00 15124.67 59.08 0.00 0.00 0.00 0.00 0.00 00:16:07.388 =================================================================================================================== 00:16:07.388 Total : 15124.67 59.08 0.00 0.00 0.00 0.00 0.00 00:16:07.388 00:16:08.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:08.320 Nvme0n1 : 10.00 15136.90 59.13 0.00 0.00 0.00 0.00 0.00 00:16:08.320 =================================================================================================================== 00:16:08.320 Total : 15136.90 59.13 0.00 0.00 0.00 0.00 0.00 00:16:08.320 00:16:08.320 00:16:08.320 Latency(us) 00:16:08.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:08.320 Nvme0n1 : 10.01 15140.33 59.14 0.00 0.00 8449.19 5024.43 19903.53 00:16:08.320 =================================================================================================================== 00:16:08.320 Total : 15140.33 59.14 0.00 0.00 8449.19 5024.43 19903.53 00:16:08.320 0 00:16:08.320 09:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1881463 00:16:08.320 09:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1881463 ']' 00:16:08.320 09:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1881463 00:16:08.320 09:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:16:08.320 09:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:08.320 09:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1881463 00:16:08.320 09:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:08.320 09:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:08.320 09:49:25 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1881463' 00:16:08.320 killing process with pid 1881463 00:16:08.320 09:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1881463 00:16:08.320 Received shutdown signal, test time was about 10.000000 seconds 00:16:08.320 00:16:08.320 Latency(us) 00:16:08.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.320 =================================================================================================================== 00:16:08.320 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:08.320 09:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1881463 00:16:08.577 09:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:09.174 09:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:09.174 09:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7187c063-6891-440e-9441-8ff133287470 00:16:09.174 09:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:09.432 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:09.432 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:09.432 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1878958 00:16:09.432 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1878958 00:16:09.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1878958 Killed "${NVMF_APP[@]}" "$@" 00:16:09.432 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:09.432 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:09.432 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:09.432 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:09.432 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:09.432 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1882924 00:16:09.432 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:09.432 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1882924 00:16:09.432 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1882924 ']' 00:16:09.432 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.432 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.432 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.432 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.432 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:09.689 [2024-07-15 09:49:26.248080] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:09.690 [2024-07-15 09:49:26.248158] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.690 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.690 [2024-07-15 09:49:26.286794] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:09.690 [2024-07-15 09:49:26.318502] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.690 [2024-07-15 09:49:26.406800] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.690 [2024-07-15 09:49:26.406865] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.690 [2024-07-15 09:49:26.406891] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.690 [2024-07-15 09:49:26.406906] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.690 [2024-07-15 09:49:26.406918] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
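Note: the target above was started with -e 0xFFFF -i 0, so all tracepoint groups are enabled and the trace shared-memory file is instance 0. Per the notices it prints, a snapshot can be taken live, or the shm file copied for offline decoding:

  spdk_trace -s nvmf -i 0           # live snapshot, as the target suggests
  cp /dev/shm/nvmf_trace.0 /tmp/    # copy for offline analysis (destination illustrative)

The harness does the latter at teardown; see the tar of nvmf_trace.0 further below.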
00:16:09.690 [2024-07-15 09:49:26.406947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.948 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.948 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:16:09.948 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:09.948 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:09.948 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:09.948 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.948 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:10.205 [2024-07-15 09:49:26.830092] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:10.205 [2024-07-15 09:49:26.830239] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:10.205 [2024-07-15 09:49:26.830295] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:10.205 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:10.205 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8b81b1ac-dd42-4553-8527-9d58c21d8e86 00:16:10.205 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8b81b1ac-dd42-4553-8527-9d58c21d8e86 00:16:10.205 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:10.205 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:10.205 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:10.205 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:10.205 09:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:10.462 09:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8b81b1ac-dd42-4553-8527-9d58c21d8e86 -t 2000 00:16:10.720 [ 00:16:10.720 { 00:16:10.720 "name": "8b81b1ac-dd42-4553-8527-9d58c21d8e86", 00:16:10.720 "aliases": [ 00:16:10.720 "lvs/lvol" 00:16:10.720 ], 00:16:10.720 "product_name": "Logical Volume", 00:16:10.720 "block_size": 4096, 00:16:10.720 "num_blocks": 38912, 00:16:10.720 "uuid": "8b81b1ac-dd42-4553-8527-9d58c21d8e86", 00:16:10.720 "assigned_rate_limits": { 00:16:10.720 "rw_ios_per_sec": 0, 00:16:10.720 "rw_mbytes_per_sec": 0, 00:16:10.720 "r_mbytes_per_sec": 0, 00:16:10.720 "w_mbytes_per_sec": 0 00:16:10.720 }, 00:16:10.720 "claimed": false, 00:16:10.720 "zoned": false, 00:16:10.720 "supported_io_types": { 00:16:10.720 "read": true, 00:16:10.720 "write": true, 00:16:10.720 "unmap": true, 00:16:10.720 "flush": false, 00:16:10.720 "reset": true, 00:16:10.720 "nvme_admin": false, 00:16:10.720 "nvme_io": false, 00:16:10.720 "nvme_io_md": 
false, 00:16:10.720 "write_zeroes": true, 00:16:10.720 "zcopy": false, 00:16:10.720 "get_zone_info": false, 00:16:10.720 "zone_management": false, 00:16:10.720 "zone_append": false, 00:16:10.720 "compare": false, 00:16:10.720 "compare_and_write": false, 00:16:10.720 "abort": false, 00:16:10.720 "seek_hole": true, 00:16:10.720 "seek_data": true, 00:16:10.720 "copy": false, 00:16:10.720 "nvme_iov_md": false 00:16:10.720 }, 00:16:10.720 "driver_specific": { 00:16:10.720 "lvol": { 00:16:10.720 "lvol_store_uuid": "7187c063-6891-440e-9441-8ff133287470", 00:16:10.720 "base_bdev": "aio_bdev", 00:16:10.720 "thin_provision": false, 00:16:10.720 "num_allocated_clusters": 38, 00:16:10.720 "snapshot": false, 00:16:10.720 "clone": false, 00:16:10.720 "esnap_clone": false 00:16:10.720 } 00:16:10.720 } 00:16:10.720 } 00:16:10.720 ] 00:16:10.720 09:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:10.720 09:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7187c063-6891-440e-9441-8ff133287470 00:16:10.720 09:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:16:10.978 09:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:16:10.978 09:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7187c063-6891-440e-9441-8ff133287470 00:16:10.978 09:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:16:11.236 09:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:16:11.236 09:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:11.494 [2024-07-15 09:49:28.111135] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:11.494 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7187c063-6891-440e-9441-8ff133287470 00:16:11.494 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:16:11.495 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7187c063-6891-440e-9441-8ff133287470 00:16:11.495 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:11.495 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:11.495 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:11.495 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:11.495 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
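Note: deleting aio_bdev hot-removes the lvstore stacked on it, so the NOT wrapper being evaluated here asserts that bdev_lvol_get_lvstores now fails; the 'No such device' JSON-RPC error just below is the expected outcome. A minimal equivalent check, assuming this run's lvstore UUID:

  # the RPC must fail once the backing aio bdev is gone
  if $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u 7187c063-6891-440e-9441-8ff133287470; then
      echo "lvstore still visible after hot remove" >&2
      exit 1
  fi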
00:16:11.495 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:11.495 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:11.495 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:11.495 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7187c063-6891-440e-9441-8ff133287470 00:16:11.752 request: 00:16:11.752 { 00:16:11.752 "uuid": "7187c063-6891-440e-9441-8ff133287470", 00:16:11.752 "method": "bdev_lvol_get_lvstores", 00:16:11.752 "req_id": 1 00:16:11.752 } 00:16:11.752 Got JSON-RPC error response 00:16:11.752 response: 00:16:11.752 { 00:16:11.752 "code": -19, 00:16:11.752 "message": "No such device" 00:16:11.752 } 00:16:11.752 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:16:11.752 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:11.752 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:11.752 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:11.752 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:12.010 aio_bdev 00:16:12.010 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8b81b1ac-dd42-4553-8527-9d58c21d8e86 00:16:12.010 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8b81b1ac-dd42-4553-8527-9d58c21d8e86 00:16:12.010 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:12.010 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:12.010 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:12.010 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:12.010 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:12.268 09:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8b81b1ac-dd42-4553-8527-9d58c21d8e86 -t 2000 00:16:12.527 [ 00:16:12.527 { 00:16:12.527 "name": "8b81b1ac-dd42-4553-8527-9d58c21d8e86", 00:16:12.527 "aliases": [ 00:16:12.527 "lvs/lvol" 00:16:12.527 ], 00:16:12.527 "product_name": "Logical Volume", 00:16:12.527 "block_size": 4096, 00:16:12.527 "num_blocks": 38912, 00:16:12.527 "uuid": "8b81b1ac-dd42-4553-8527-9d58c21d8e86", 00:16:12.527 "assigned_rate_limits": { 00:16:12.527 "rw_ios_per_sec": 0, 00:16:12.527 "rw_mbytes_per_sec": 0, 00:16:12.527 "r_mbytes_per_sec": 0, 00:16:12.527 "w_mbytes_per_sec": 0 00:16:12.527 }, 00:16:12.527 "claimed": false, 00:16:12.527 "zoned": false, 00:16:12.527 "supported_io_types": { 
00:16:12.527 "read": true, 00:16:12.527 "write": true, 00:16:12.527 "unmap": true, 00:16:12.527 "flush": false, 00:16:12.527 "reset": true, 00:16:12.527 "nvme_admin": false, 00:16:12.527 "nvme_io": false, 00:16:12.527 "nvme_io_md": false, 00:16:12.527 "write_zeroes": true, 00:16:12.527 "zcopy": false, 00:16:12.527 "get_zone_info": false, 00:16:12.527 "zone_management": false, 00:16:12.527 "zone_append": false, 00:16:12.527 "compare": false, 00:16:12.527 "compare_and_write": false, 00:16:12.527 "abort": false, 00:16:12.527 "seek_hole": true, 00:16:12.527 "seek_data": true, 00:16:12.527 "copy": false, 00:16:12.527 "nvme_iov_md": false 00:16:12.527 }, 00:16:12.527 "driver_specific": { 00:16:12.527 "lvol": { 00:16:12.527 "lvol_store_uuid": "7187c063-6891-440e-9441-8ff133287470", 00:16:12.527 "base_bdev": "aio_bdev", 00:16:12.527 "thin_provision": false, 00:16:12.527 "num_allocated_clusters": 38, 00:16:12.527 "snapshot": false, 00:16:12.527 "clone": false, 00:16:12.527 "esnap_clone": false 00:16:12.527 } 00:16:12.527 } 00:16:12.527 } 00:16:12.527 ] 00:16:12.527 09:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:12.527 09:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7187c063-6891-440e-9441-8ff133287470 00:16:12.527 09:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:12.785 09:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:12.786 09:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7187c063-6891-440e-9441-8ff133287470 00:16:12.786 09:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:13.043 09:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:13.043 09:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8b81b1ac-dd42-4553-8527-9d58c21d8e86 00:16:13.301 09:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7187c063-6891-440e-9441-8ff133287470 00:16:13.867 09:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:13.867 09:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:13.867 00:16:13.867 real 0m19.419s 00:16:13.867 user 0m47.908s 00:16:13.867 sys 0m5.000s 00:16:13.867 09:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:13.867 09:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:13.867 ************************************ 00:16:13.867 END TEST lvs_grow_dirty 00:16:13.867 ************************************ 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
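Note: the cluster accounting the test just verified is plain arithmetic with the 4 MiB cluster size chosen at lvstore creation (values observed in this run):

  # 400 MiB backing file / 4 MiB clusters = 100; minus 1 cluster of lvstore
  # metadata in this run = 99 total_data_clusters (49 before the grow)
  # 150 MiB lvol rounds up to 38 clusters (num_blocks 38912 * 4096 B = 152 MiB)
  # 99 total - 38 allocated = 61 free
  rpc bdev_lvol_get_lvstores -u 7187c063-6891-440e-9441-8ff133287470 \
      | jq -r '.[0].free_clusters'    # prints 61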
00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:14.125 nvmf_trace.0 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.125 rmmod nvme_tcp 00:16:14.125 rmmod nvme_fabrics 00:16:14.125 rmmod nvme_keyring 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1882924 ']' 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1882924 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1882924 ']' 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1882924 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1882924 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1882924' 00:16:14.125 killing process with pid 1882924 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1882924 00:16:14.125 09:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1882924 00:16:14.385 09:49:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:14.385 09:49:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:14.385 09:49:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:14.385 
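Note: nvmftestfini's teardown, traced above and completed below, amounts to flushing I/O, unloading the initiator-side kernel modules, stopping the target, and clearing the test interface. Roughly (pid and interface name are this run's; killprocess wraps the plain kill with retries and checks):

  sync
  modprobe -v -r nvme-tcp      # also drops nvme_fabrics / nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  kill 1882924                 # the nvmf_tgt started for lvs_grow_dirty
  ip -4 addr flush cvl_0_1     # done inside nvmf_tcp_fini, as the following lines show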
09:49:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:14.385 09:49:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:14.385 09:49:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.385 09:49:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.385 09:49:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.311 09:49:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:16.311 00:16:16.311 real 0m42.039s 00:16:16.311 user 1m10.523s 00:16:16.311 sys 0m8.762s 00:16:16.311 09:49:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:16.311 09:49:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:16.311 ************************************ 00:16:16.311 END TEST nvmf_lvs_grow 00:16:16.311 ************************************ 00:16:16.569 09:49:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:16.569 09:49:33 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:16.569 09:49:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:16.569 09:49:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.569 09:49:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:16.569 ************************************ 00:16:16.569 START TEST nvmf_bdev_io_wait 00:16:16.569 ************************************ 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:16.569 * Looking for test storage... 
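The nvmf_lvs_grow teardown that just ran follows a fixed pattern: archive the target's shared-memory trace file, unload the kernel initiator modules with retries, then kill and reap the nvmf_tgt process. A minimal sketch of that sequence, assuming the wrapper function and OUT_DIR variable (the individual commands are the ones the trace executed; the wrapper is not SPDK's actual common.sh):

    #!/usr/bin/env bash
    # Condensed sketch of the nvmftestfini teardown traced above.
    OUT_DIR=${OUT_DIR:-/tmp}    # assumed stand-in for the autotest output dir

    nvmf_teardown() {
        local pid=$1
        # Archive the target's shared-memory trace file before the process exits.
        shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')
        for n in $shm_files; do
            tar -C /dev/shm/ -cvzf "$OUT_DIR/${n}_shm.tar.gz" "$n"
        done
        sync
        # nvme-tcp may still be pinned while connections drain, so retry the
        # unload with errors tolerated, then restore errexit.
        set +e
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        done
        set -e
        # Stop the nvmf_tgt reactors; wait works here because the target was
        # started by this same shell.
        kill "$pid"
        wait "$pid"
    }

The rmmod lines in the trace (nvme_tcp, nvme_fabrics, nvme_keyring) are the verbose output of that first modprobe -r call cascading through the module dependencies.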
00:16:16.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.569 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:16:16.570 09:49:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.470 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:18.470 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:16:18.470 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:18.470 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:18.470 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:18.470 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:18.470 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:18.470 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:16:18.470 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:18.470 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:16:18.470 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:16:18.470 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:16:18.470 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:18.471 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:18.471 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:18.471 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:18.471 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:18.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:18.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:16:18.471 00:16:18.471 --- 10.0.0.2 ping statistics --- 00:16:18.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.471 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:18.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:18.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:16:18.471 00:16:18.471 --- 10.0.0.1 ping statistics --- 00:16:18.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.471 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1885445 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1885445 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1885445 ']' 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:18.471 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.730 [2024-07-15 09:49:35.267451] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
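Condensed, the network plumbing and target launch traced above look like the sketch below. The interface names, addresses, port, and flags are the ones from the trace; the script framing is illustrative and the binary paths are shortened to repository-relative form:

    #!/usr/bin/env bash
    # Sketch of nvmf_tcp_init as traced: the target-side port moves into a
    # private namespace so target (10.0.0.2) and initiator (10.0.0.1) get
    # separate network stacks on one host.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1   # and the reverse path
    modprobe nvme-tcp                        # kernel initiator for later connects
    # Launch the target inside the namespace so it listens on 10.0.0.2.
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

Running the target under ip netns exec is why every later NVMF_APP invocation in the trace carries the cvl_0_0_ns_spdk prefix.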
00:16:18.730 [2024-07-15 09:49:35.267541] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.730 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.730 [2024-07-15 09:49:35.306947] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:18.730 [2024-07-15 09:49:35.338489] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:18.730 [2024-07-15 09:49:35.435836] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.730 [2024-07-15 09:49:35.435924] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.730 [2024-07-15 09:49:35.435942] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.730 [2024-07-15 09:49:35.435956] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.730 [2024-07-15 09:49:35.435967] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:18.730 [2024-07-15 09:49:35.436027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.730 [2024-07-15 09:49:35.436082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.730 [2024-07-15 09:49:35.436206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:18.730 [2024-07-15 09:49:35.436208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.730 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:18.730 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:16:18.730 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:18.730 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:18.730 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.989 [2024-07-15 
09:49:35.625351] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.989 Malloc0 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.989 [2024-07-15 09:49:35.690551] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1885471 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1885473 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1885475 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:18.989 { 00:16:18.989 "params": { 00:16:18.989 "name": "Nvme$subsystem", 00:16:18.989 "trtype": "$TEST_TRANSPORT", 00:16:18.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.989 "adrfam": "ipv4", 00:16:18.989 "trsvcid": "$NVMF_PORT", 00:16:18.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.989 "hdgst": ${hdgst:-false}, 
00:16:18.989 "ddgst": ${ddgst:-false} 00:16:18.989 }, 00:16:18.989 "method": "bdev_nvme_attach_controller" 00:16:18.989 } 00:16:18.989 EOF 00:16:18.989 )") 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:18.989 { 00:16:18.989 "params": { 00:16:18.989 "name": "Nvme$subsystem", 00:16:18.989 "trtype": "$TEST_TRANSPORT", 00:16:18.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.989 "adrfam": "ipv4", 00:16:18.989 "trsvcid": "$NVMF_PORT", 00:16:18.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.989 "hdgst": ${hdgst:-false}, 00:16:18.989 "ddgst": ${ddgst:-false} 00:16:18.989 }, 00:16:18.989 "method": "bdev_nvme_attach_controller" 00:16:18.989 } 00:16:18.989 EOF 00:16:18.989 )") 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1885477 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:18.989 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:18.989 { 00:16:18.989 "params": { 00:16:18.989 "name": "Nvme$subsystem", 00:16:18.989 "trtype": "$TEST_TRANSPORT", 00:16:18.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.990 "adrfam": "ipv4", 00:16:18.990 "trsvcid": "$NVMF_PORT", 00:16:18.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.990 "hdgst": ${hdgst:-false}, 00:16:18.990 "ddgst": ${ddgst:-false} 00:16:18.990 }, 00:16:18.990 "method": "bdev_nvme_attach_controller" 00:16:18.990 } 00:16:18.990 EOF 00:16:18.990 )") 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- 
# for subsystem in "${@:-1}" 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:18.990 { 00:16:18.990 "params": { 00:16:18.990 "name": "Nvme$subsystem", 00:16:18.990 "trtype": "$TEST_TRANSPORT", 00:16:18.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.990 "adrfam": "ipv4", 00:16:18.990 "trsvcid": "$NVMF_PORT", 00:16:18.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.990 "hdgst": ${hdgst:-false}, 00:16:18.990 "ddgst": ${ddgst:-false} 00:16:18.990 }, 00:16:18.990 "method": "bdev_nvme_attach_controller" 00:16:18.990 } 00:16:18.990 EOF 00:16:18.990 )") 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1885471 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:18.990 "params": { 00:16:18.990 "name": "Nvme1", 00:16:18.990 "trtype": "tcp", 00:16:18.990 "traddr": "10.0.0.2", 00:16:18.990 "adrfam": "ipv4", 00:16:18.990 "trsvcid": "4420", 00:16:18.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.990 "hdgst": false, 00:16:18.990 "ddgst": false 00:16:18.990 }, 00:16:18.990 "method": "bdev_nvme_attach_controller" 00:16:18.990 }' 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:18.990 "params": { 00:16:18.990 "name": "Nvme1", 00:16:18.990 "trtype": "tcp", 00:16:18.990 "traddr": "10.0.0.2", 00:16:18.990 "adrfam": "ipv4", 00:16:18.990 "trsvcid": "4420", 00:16:18.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.990 "hdgst": false, 00:16:18.990 "ddgst": false 00:16:18.990 }, 00:16:18.990 "method": "bdev_nvme_attach_controller" 00:16:18.990 }' 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:18.990 "params": { 00:16:18.990 "name": "Nvme1", 00:16:18.990 "trtype": "tcp", 00:16:18.990 "traddr": "10.0.0.2", 00:16:18.990 "adrfam": "ipv4", 00:16:18.990 "trsvcid": "4420", 00:16:18.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.990 "hdgst": false, 00:16:18.990 "ddgst": false 00:16:18.990 }, 00:16:18.990 "method": "bdev_nvme_attach_controller" 00:16:18.990 }' 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:18.990 09:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:18.990 "params": { 00:16:18.990 "name": "Nvme1", 00:16:18.990 "trtype": "tcp", 00:16:18.990 "traddr": "10.0.0.2", 00:16:18.990 "adrfam": "ipv4", 00:16:18.990 "trsvcid": "4420", 00:16:18.990 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:16:18.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.990 "hdgst": false, 00:16:18.990 "ddgst": false 00:16:18.990 }, 00:16:18.990 "method": "bdev_nvme_attach_controller" 00:16:18.990 }' 00:16:18.990 [2024-07-15 09:49:35.738082] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:18.990 [2024-07-15 09:49:35.738082] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:18.990 [2024-07-15 09:49:35.738082] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:18.990 [2024-07-15 09:49:35.738180] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 09:49:35.738179] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 09:49:35.738180] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:18.990 --proc-type=auto ] 00:16:18.990 --proc-type=auto ] 00:16:18.990 [2024-07-15 09:49:35.738686] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:18.990 [2024-07-15 09:49:35.738762] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:19.248 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.248 [2024-07-15 09:49:35.884272] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:19.248 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.248 [2024-07-15 09:49:35.913718] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.248 [2024-07-15 09:49:35.989178] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:19.248 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.248 [2024-07-15 09:49:35.991527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:19.248 [2024-07-15 09:49:36.019515] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.507 [2024-07-15 09:49:36.087325] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:19.507 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.507 [2024-07-15 09:49:36.095260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:19.507 [2024-07-15 09:49:36.117635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.507 [2024-07-15 09:49:36.161454] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:19.507 [2024-07-15 09:49:36.191282] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.507 [2024-07-15 09:49:36.192203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:19.507 [2024-07-15 09:49:36.259263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:16:19.765 Running I/O for 1 seconds... 00:16:19.765 Running I/O for 1 seconds... 00:16:19.765 Running I/O for 1 seconds... 00:16:20.023 Running I/O for 1 seconds... 00:16:20.958 00:16:20.958 Latency(us) 00:16:20.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.958 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:20.958 Nvme1n1 : 1.00 200369.89 782.69 0.00 0.00 636.01 262.45 855.61 00:16:20.958 =================================================================================================================== 00:16:20.958 Total : 200369.89 782.69 0.00 0.00 636.01 262.45 855.61 00:16:20.958 00:16:20.958 Latency(us) 00:16:20.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.958 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:20.958 Nvme1n1 : 1.01 9715.83 37.95 0.00 0.00 13110.94 9077.95 22427.88 00:16:20.958 =================================================================================================================== 00:16:20.958 Total : 9715.83 37.95 0.00 0.00 13110.94 9077.95 22427.88 00:16:20.958 00:16:20.958 Latency(us) 00:16:20.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.958 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:20.958 Nvme1n1 : 1.01 7656.33 29.91 0.00 0.00 16630.51 7427.41 28738.75 00:16:20.958 =================================================================================================================== 00:16:20.958 Total : 7656.33 29.91 0.00 0.00 16630.51 7427.41 28738.75 00:16:20.958 00:16:20.958 Latency(us) 00:16:20.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.958 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:20.958 Nvme1n1 : 1.01 10152.97 39.66 0.00 0.00 12561.95 6310.87 23301.69 00:16:20.958 =================================================================================================================== 00:16:20.958 Total : 10152.97 39.66 0.00 0.00 12561.95 6310.87 23301.69 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1885473 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1885475 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1885477 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:16:21.216 09:49:37 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:21.216 rmmod nvme_tcp 00:16:21.216 rmmod nvme_fabrics 00:16:21.216 rmmod nvme_keyring 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1885445 ']' 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1885445 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1885445 ']' 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1885445 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1885445 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1885445' 00:16:21.216 killing process with pid 1885445 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1885445 00:16:21.216 09:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1885445 00:16:21.475 09:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:21.475 09:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:21.475 09:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:21.475 09:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.475 09:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:21.475 09:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.475 09:49:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.475 09:49:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.011 09:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:24.011 00:16:24.011 real 0m7.057s 00:16:24.011 user 0m15.413s 00:16:24.011 sys 0m3.853s 00:16:24.011 09:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:24.011 09:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:24.011 ************************************ 00:16:24.011 END TEST nvmf_bdev_io_wait 00:16:24.011 ************************************ 00:16:24.011 09:49:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:24.011 09:49:40 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test 
nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:24.011 09:49:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:24.011 09:49:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.011 09:49:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:24.011 ************************************ 00:16:24.011 START TEST nvmf_queue_depth 00:16:24.011 ************************************ 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:24.011 * Looking for test storage... 00:16:24.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:24.011 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.012 09:49:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.012 09:49:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.012 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:24.012 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:24.012 09:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:16:24.012 09:49:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:25.388 09:49:42 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:25.388 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.388 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:25.389 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
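The walk above is the harness's NIC discovery: known vendor:device pairs are bucketed into the e810/x722/mlx arrays, both 0000:0a:00.x functions match the Intel 0x8086:0x159b entry (an E810 port driven by ice), and each surviving PCI function is then resolved to its kernel net device through sysfs, as echoed on the following lines. A minimal standalone sketch of that sysfs lookup, using the first PCI address from the log and otherwise hypothetical names:

  # Hypothetical recreation of the pci_net_devs glob traced in nvmf/common.sh
  pci=0000:0a:00.0                      # first E810 port reported above
  for netdir in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$netdir" ] || continue        # glob stays literal if no netdev is bound
    echo "Found net device under $pci: ${netdir##*/}"   # same strip as ${pci_net_devs[@]##*/}
  done

On this machine the loop would print cvl_0_0, the renamed ice interface that the next stage turns into the target-side endpoint.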
00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:25.389 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:25.389 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:25.389 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:16:25.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:25.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms
00:16:25.648 
00:16:25.648 --- 10.0.0.2 ping statistics ---
00:16:25.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:25.648 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:25.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:25.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms
00:16:25.648 
00:16:25.648 --- 10.0.0.1 ping statistics ---
00:16:25.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:25.648 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1887692
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1887692
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1887692 ']'
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:25.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
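Everything the test needs from the network is now in place: the cvl_0_0 port lives in the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened for NVMe/TCP, and one ping in each direction proves the link. Collected into a standalone sketch (commands as traced above):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP default port
  ping -c 1 10.0.0.2                                                 # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> root ns

Pinning the target into its own namespace is what lets a single two-port NIC stand in for a real fabric on these phy rigs: traffic between 10.0.0.1 and 10.0.0.2 is forced out of one physical port and back in through the other instead of being short-circuited through the local stack.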
00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.648 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:25.648 [2024-07-15 09:49:42.342244] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:25.648 [2024-07-15 09:49:42.342338] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.648 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.648 [2024-07-15 09:49:42.379845] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:25.648 [2024-07-15 09:49:42.411626] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.907 [2024-07-15 09:49:42.501051] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.907 [2024-07-15 09:49:42.501115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.907 [2024-07-15 09:49:42.501141] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.907 [2024-07-15 09:49:42.501155] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.907 [2024-07-15 09:49:42.501168] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.907 [2024-07-15 09:49:42.501209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.907 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.907 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:16:25.907 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:25.907 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:25.907 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:25.907 09:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.907 09:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:25.907 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.907 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:25.907 [2024-07-15 09:49:42.649547] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.907 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.907 09:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:25.907 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.907 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:25.907 Malloc0 00:16:25.907 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.907 09:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:25.907 09:49:42 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.907 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:26.165 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.165 09:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:26.165 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.165 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:26.165 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.165 09:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:26.165 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.165 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:26.165 [2024-07-15 09:49:42.706182] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.165 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.165 09:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1887768 00:16:26.165 09:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:26.165 09:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:26.165 09:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1887768 /var/tmp/bdevperf.sock 00:16:26.165 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1887768 ']' 00:16:26.165 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:26.165 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.165 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:26.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:26.165 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.165 09:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:26.165 [2024-07-15 09:49:42.754608] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:26.166 [2024-07-15 09:49:42.754697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1887768 ] 00:16:26.166 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.166 [2024-07-15 09:49:42.789438] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
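With the trace above, the target side is complete: a TCP transport with the traced -u 8192 option, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev as a namespace, and a listener on 10.0.0.2:4420; bdevperf is then launched with -q 1024 -o 4096 -w verify -t 10 to drive it. The same build-up as a standalone sketch (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py; the target's default /var/tmp/spdk.sock RPC socket is assumed):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192       # transport options exactly as traced
  $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB ramdisk, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The -a flag marks the subsystem as allow-any-host, which is why the bdevperf initiator can attach without an explicit host NQN whitelist.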
00:16:26.166 [2024-07-15 09:49:42.817546] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:26.423 [2024-07-15 09:49:42.902883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:26.423 09:49:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:26.423 09:49:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0
00:16:26.423 09:49:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:26.423 09:49:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:26.423 09:49:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:16:26.681 NVMe0n1
00:16:26.681 09:49:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:26.681 09:49:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:16:26.681 Running I/O for 10 seconds...
00:16:38.877 
00:16:38.877 Latency(us)
00:16:38.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:38.877 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:16:38.877 Verification LBA range: start 0x0 length 0x4000
00:16:38.877 NVMe0n1 : 10.09 8574.55 33.49 0.00 0.00 118824.45 24855.13 74177.04
00:16:38.877 ===================================================================================================================
00:16:38.877 Total : 8574.55 33.49 0.00 0.00 118824.45 24855.13 74177.04
00:16:38.877 0
00:16:38.877 09:49:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1887768
00:16:38.877 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1887768 ']'
00:16:38.877 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1887768
00:16:38.877 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname
00:16:38.877 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:16:38.877 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1887768
00:16:38.877 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:16:38.877 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:16:38.877 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1887768'
00:16:38.877 killing process with pid 1887768
00:16:38.877 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1887768
00:16:38.878 Received shutdown signal, test time was about 10.000000 seconds
00:16:38.878 
00:16:38.878 Latency(us)
00:16:38.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:38.878 ===================================================================================================================
00:16:38.878 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1887768
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
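The columns of the result table are mutually consistent: 8574.55 IOPS of 4096-byte I/O is 8574.55 x 4096 B, roughly 35.1 MB/s, which is the 33.49 in the MiB/s column; and with 1024 commands held in flight, Little's law predicts an average latency of 1024 / 8574.55 IOPS, roughly 0.119 s, in line with the reported 118824.45 us. The second, all-zero table accompanies the shutdown notice rather than a second measurement: by then the run was already complete.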
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:38.878 rmmod nvme_tcp
00:16:38.878 rmmod nvme_fabrics
00:16:38.878 rmmod nvme_keyring
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1887692 ']'
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1887692
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1887692 ']'
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1887692
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1887692
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1887692'
00:16:38.878 killing process with pid 1887692
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1887692
00:16:38.878 09:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1887692
00:16:38.878 09:49:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:16:38.878 09:49:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:16:38.878 09:49:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:16:38.878 09:49:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:38.878 09:49:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:38.878 09:49:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:38.878 09:49:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:38.878 09:49:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:39.487 09:49:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:39.487 
00:16:39.487 real 0m15.935s
00:16:39.487 user 0m22.660s
00:16:39.487 sys 0m2.901s
00:16:39.487 09:49:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable
00:16:39.487 09:49:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:16:39.487 ************************************
00:16:39.487 END TEST nvmf_queue_depth
00:16:39.487 ************************************
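Teardown mirrors setup in reverse: sync, unload the initiator modules (the bare rmmod lines are the verbose output of the first modprobe -r, which removes nvme_tcp, nvme_fabrics and nvme_keyring as one dependency chain), kill and reap the target, then drop the namespace and flush the initiator address. A sketch of the same cleanup; the ip netns delete line is an assumption about what the _remove_spdk_ns helper does internally, since its xtrace output is redirected away by the 14> /dev/null in the eval above:

  sync
  modprobe -v -r nvme-tcp                    # -v prints each rmmod it performs
  modprobe -v -r nvme-fabrics                # usually a no-op after the line above
  kill 1887692 && wait 1887692 2>/dev/null   # nvmf_tgt pid from the trace
  ip netns delete cvl_0_0_ns_spdk            # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1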
00:16:39.487 09:49:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:16:39.487 09:49:56 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:16:39.487 09:49:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:16:39.487 09:49:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:39.487 09:49:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:16:39.487 ************************************
00:16:39.487 START TEST nvmf_target_multipath
00:16:39.487 ************************************
00:16:39.487 09:49:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:16:39.487 * Looking for test storage...
00:16:39.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:39.745 09:49:56 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:39.745 09:49:56 
nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:39.746 09:49:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:41.648 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:41.648 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.648 09:49:58 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:41.648 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.648 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:41.649 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 
-- # ip -4 addr flush cvl_0_0
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:16:41.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:41.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms
00:16:41.649 
00:16:41.649 --- 10.0.0.2 ping statistics ---
00:16:41.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:41.649 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:41.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:41.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms
00:16:41.649 
00:16:41.649 --- 10.0.0.1 ping statistics ---
00:16:41.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:41.649 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:16:41.649 only one NIC for nvmf test
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 
-- # '[' tcp == tcp ']' 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:41.649 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:41.649 rmmod nvme_tcp 00:16:41.907 rmmod nvme_fabrics 00:16:41.907 rmmod nvme_keyring 00:16:41.907 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:41.907 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:41.907 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:41.907 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:41.907 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:41.907 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:41.907 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:41.907 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:41.907 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:41.907 09:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.907 09:49:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.907 09:49:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.809 09:50:00 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:43.809 
00:16:43.809 real 0m4.313s
00:16:43.809 user 0m0.811s
00:16:43.809 sys 0m1.496s
00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable
00:16:43.809 09:50:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:16:43.809 ************************************
00:16:43.809 END TEST nvmf_target_multipath
00:16:43.809 ************************************
00:16:43.809 09:50:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:16:43.809 09:50:00 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:16:43.809 09:50:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:16:43.809 09:50:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:43.809 09:50:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:16:43.809 ************************************
00:16:43.809 START TEST nvmf_zcopy
00:16:43.809 ************************************
00:16:43.809 09:50:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:16:44.067 * Looking for test storage...
00:16:44.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:44.067 09:50:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:44.067 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:16:44.067 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:44.067 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:44.067 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:44.067 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:44.067 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:44.067 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:44.067 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:44.067 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:44.067 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:44.067 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:44.067 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:44.067 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:16:44.067 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:44.067 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:44.067 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:44.067 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:44.067 09:50:00 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:44.068 
09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:44.068 09:50:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:45.967 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:45.967 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:45.967 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.967 
09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:45.967 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:45.967 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:46.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:46.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:16:46.225 00:16:46.225 --- 10.0.0.2 ping statistics --- 00:16:46.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.225 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:46.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:46.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:16:46.225 00:16:46.225 --- 10.0.0.1 ping statistics --- 00:16:46.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.225 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1892883 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1892883 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1892883 ']' 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:46.225 09:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:46.225 [2024-07-15 09:50:02.877882] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
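With nvmftestinit complete (interface plumbing verified by the two pings above), the target application starts up inside the namespace. For reference, the nvmf_tcp_init sequence traced above condenses to plain commands: one E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace to act as the target, while its sibling (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator.

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
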
00:16:46.225 [2024-07-15 09:50:02.877963] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.225 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.225 [2024-07-15 09:50:02.914821] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:46.225 [2024-07-15 09:50:02.940611] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.484 [2024-07-15 09:50:03.025075] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.484 [2024-07-15 09:50:03.025123] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.484 [2024-07-15 09:50:03.025144] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.484 [2024-07-15 09:50:03.025161] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.484 [2024-07-15 09:50:03.025175] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:46.484 [2024-07-15 09:50:03.025214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.484 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.484 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:46.484 09:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:46.484 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:46.484 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:46.484 09:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.484 09:50:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:46.484 09:50:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:46.484 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.484 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:46.484 [2024-07-15 09:50:03.167807] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.484 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.484 09:50:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:46.484 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.484 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:46.484 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.484 09:50:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.484 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:46.485 [2024-07-15 09:50:03.184007] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.485 09:50:03 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:46.485 malloc0 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:46.485 { 00:16:46.485 "params": { 00:16:46.485 "name": "Nvme$subsystem", 00:16:46.485 "trtype": "$TEST_TRANSPORT", 00:16:46.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:46.485 "adrfam": "ipv4", 00:16:46.485 "trsvcid": "$NVMF_PORT", 00:16:46.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:46.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:46.485 "hdgst": ${hdgst:-false}, 00:16:46.485 "ddgst": ${ddgst:-false} 00:16:46.485 }, 00:16:46.485 "method": "bdev_nvme_attach_controller" 00:16:46.485 } 00:16:46.485 EOF 00:16:46.485 )") 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:46.485 09:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:46.485 "params": { 00:16:46.485 "name": "Nvme1", 00:16:46.485 "trtype": "tcp", 00:16:46.485 "traddr": "10.0.0.2", 00:16:46.485 "adrfam": "ipv4", 00:16:46.485 "trsvcid": "4420", 00:16:46.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:46.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:46.485 "hdgst": false, 00:16:46.485 "ddgst": false 00:16:46.485 }, 00:16:46.485 "method": "bdev_nvme_attach_controller" 00:16:46.485 }' 00:16:46.485 [2024-07-15 09:50:03.265606] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
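At this point the target side is fully configured: a TCP transport created with zero-copy enabled, subsystem cnode1 carrying a 32 MiB malloc ramdisk as namespace 1, plus data and discovery listeners on 10.0.0.2:4420. Driven through scripts/rpc.py instead of the harness's rpc_cmd wrapper (an equivalent, assumed invocation style; the flags are exactly those traced above), the setup would read:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0       # 32 MiB, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The bdevperf initiator is then launched against that target with the JSON attach-controller parameters printed above, as its startup banner below shows.
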
00:16:46.485 [2024-07-15 09:50:03.265678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1892960 ] 00:16:46.742 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.742 [2024-07-15 09:50:03.298737] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:46.742 [2024-07-15 09:50:03.330913] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.742 [2024-07-15 09:50:03.427292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.999 Running I/O for 10 seconds... 00:16:56.964 00:16:56.964 Latency(us) 00:16:56.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.964 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:56.964 Verification LBA range: start 0x0 length 0x1000 00:16:56.964 Nvme1n1 : 10.01 5769.47 45.07 0.00 0.00 22123.43 509.72 32622.36 00:16:56.964 =================================================================================================================== 00:16:56.964 Total : 5769.47 45.07 0.00 0.00 22123.43 509.72 32622.36 00:16:57.222 09:50:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1894208 00:16:57.222 09:50:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:57.222 09:50:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:57.222 09:50:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:57.222 09:50:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:57.222 09:50:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:57.222 09:50:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:57.222 09:50:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:57.222 09:50:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:57.222 { 00:16:57.222 "params": { 00:16:57.222 "name": "Nvme$subsystem", 00:16:57.222 "trtype": "$TEST_TRANSPORT", 00:16:57.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:57.222 "adrfam": "ipv4", 00:16:57.222 "trsvcid": "$NVMF_PORT", 00:16:57.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:57.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:57.222 "hdgst": ${hdgst:-false}, 00:16:57.222 "ddgst": ${ddgst:-false} 00:16:57.222 }, 00:16:57.222 "method": "bdev_nvme_attach_controller" 00:16:57.222 } 00:16:57.222 EOF 00:16:57.222 )") 00:16:57.222 09:50:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:57.222 [2024-07-15 09:50:13.891612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.222 [2024-07-15 09:50:13.891660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.222 09:50:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
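The 10-second verify pass above sustains roughly 5.8k IOPS of 8 KiB I/O (one Nvme1n1 job, queue depth 128) over the zero-copy TCP transport. The second bdevperf pass being launched here switches to a 5-second 50/50 random read/write workload (-t 5 -q 128 -w randrw -M 50 -o 8192). As a self-contained sketch of how such a run is assembled, assuming the standard SPDK JSON-config envelope around the attach-controller stanza printed in the trace (the harness's generated template may carry extra options):

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
./build/examples/bdevperf --json /tmp/nvme1.json -t 5 -q 128 -w randrw -M 50 -o 8192
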
00:16:57.222 09:50:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:57.222 09:50:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:57.222 "params": { 00:16:57.222 "name": "Nvme1", 00:16:57.222 "trtype": "tcp", 00:16:57.222 "traddr": "10.0.0.2", 00:16:57.222 "adrfam": "ipv4", 00:16:57.222 "trsvcid": "4420", 00:16:57.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:57.222 "hdgst": false, 00:16:57.222 "ddgst": false 00:16:57.222 }, 00:16:57.222 "method": "bdev_nvme_attach_controller" 00:16:57.222 }' 00:16:57.222 [2024-07-15 09:50:13.899581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.222 [2024-07-15 09:50:13.899611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.222 [2024-07-15 09:50:13.907593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.222 [2024-07-15 09:50:13.907619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.222 [2024-07-15 09:50:13.915604] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.222 [2024-07-15 09:50:13.915626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.222 [2024-07-15 09:50:13.923625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.222 [2024-07-15 09:50:13.923646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.222 [2024-07-15 09:50:13.931272] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:57.222 [2024-07-15 09:50:13.931326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1894208 ] 00:16:57.222 [2024-07-15 09:50:13.931665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.222 [2024-07-15 09:50:13.931692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.222 [2024-07-15 09:50:13.939671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.222 [2024-07-15 09:50:13.939695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.222 [2024-07-15 09:50:13.947690] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.222 [2024-07-15 09:50:13.947712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.222 [2024-07-15 09:50:13.955709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.222 [2024-07-15 09:50:13.955731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.222 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.222 [2024-07-15 09:50:13.963729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.222 [2024-07-15 09:50:13.963751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.222 [2024-07-15 09:50:13.965841] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
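The long stream of paired errors that dominates the rest of this section ("Requested NSID 1 already in use" from subsystem.c followed by "Unable to add namespace" from nvmf_rpc.c) is the expected output of this stage: while bdevperf drives the random read/write load, the harness keeps re-issuing nvmf_subsystem_add_ns for NSID 1. Every attempt must fail, since malloc0 already occupies NSID 1, but each one forces the subsystem through its pause/resume path under live zero-copy I/O (as the nvmf_rpc_ns_paused frames suggest), which is the behaviour being exercised. Reconstructed from the error stream, a loop of roughly this shape (an assumption, not lifted from zcopy.sh):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
while kill -0 "$perfpid" 2>/dev/null; do   # as long as bdevperf (pid 1894208) is running
    # Expected to fail: NSID 1 is already attached to cnode1.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done
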
00:16:57.222 [2024-07-15 09:50:13.971774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.222 [2024-07-15 09:50:13.971802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.222 [2024-07-15 09:50:13.979795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.222 [2024-07-15 09:50:13.979830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.222 [2024-07-15 09:50:13.987816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.222 [2024-07-15 09:50:13.987843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.222 [2024-07-15 09:50:13.995835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.222 [2024-07-15 09:50:13.995862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.222 [2024-07-15 09:50:13.996370] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.222 [2024-07-15 09:50:14.003899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.222 [2024-07-15 09:50:14.003950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.011921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.011972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.019927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.019951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.027943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.027966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.035963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.035986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.043977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.044000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.052015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.052047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.060033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.060062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.068039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.068062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.076061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.076083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.084082] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.084104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.092106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.092128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.092142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.480 [2024-07-15 09:50:14.100129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.100153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.108187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.108221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.116207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.116257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.124240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.124290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.132269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.132306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.140293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.140333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.148307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.148346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.156323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.156363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.164316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.164344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.480 [2024-07-15 09:50:14.172363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.480 [2024-07-15 09:50:14.172398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.481 [2024-07-15 09:50:14.180387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.481 [2024-07-15 09:50:14.180425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.481 [2024-07-15 09:50:14.188407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.481 [2024-07-15 09:50:14.188443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.481 [2024-07-15 09:50:14.196410] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.481 [2024-07-15 09:50:14.196437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.481 [2024-07-15 09:50:14.204437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.481 [2024-07-15 09:50:14.204465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.481 [2024-07-15 09:50:14.212462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.481 [2024-07-15 09:50:14.212492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.481 [2024-07-15 09:50:14.220483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.481 [2024-07-15 09:50:14.220511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.481 [2024-07-15 09:50:14.228505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.481 [2024-07-15 09:50:14.228535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.481 [2024-07-15 09:50:14.236534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.481 [2024-07-15 09:50:14.236563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.481 [2024-07-15 09:50:14.244550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.481 [2024-07-15 09:50:14.244578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.481 [2024-07-15 09:50:14.252575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.481 [2024-07-15 09:50:14.252602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.481 [2024-07-15 09:50:14.260598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.481 [2024-07-15 09:50:14.260625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.738 [2024-07-15 09:50:14.268622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.738 [2024-07-15 09:50:14.268648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.738 [2024-07-15 09:50:14.276651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.738 [2024-07-15 09:50:14.276692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.738 [2024-07-15 09:50:14.284673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.284701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.292691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.292718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.300720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.300750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.308740] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.308768] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.316761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.316789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.324786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.324814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.332808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.332837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.340829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.340856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.348853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.348888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.356874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.356925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.364906] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.364950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.372946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.372971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.380965] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.380988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.388982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.389015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.397017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.397040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.405036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.405059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.413048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.413073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.421068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.421091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.429098] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.429130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 Running I/O for 5 seconds... 00:16:57.739 [2024-07-15 09:50:14.437114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.437138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.449449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.449481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.459270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.459303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.471800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.471831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.482543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.482570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.493485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.493513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.504065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.504093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.739 [2024-07-15 09:50:14.515056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.739 [2024-07-15 09:50:14.515084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.997 [2024-07-15 09:50:14.525805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.997 [2024-07-15 09:50:14.525832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.997 [2024-07-15 09:50:14.538402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.997 [2024-07-15 09:50:14.538430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.997 [2024-07-15 09:50:14.548490] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.997 [2024-07-15 09:50:14.548518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.997 [2024-07-15 09:50:14.559471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.997 [2024-07-15 09:50:14.559499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.997 [2024-07-15 09:50:14.571683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.997 [2024-07-15 09:50:14.571710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.997 [2024-07-15 09:50:14.581924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.997 
[2024-07-15 09:50:14.581952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.997 [2024-07-15 09:50:14.592169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.997 [2024-07-15 09:50:14.592197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.997 [2024-07-15 09:50:14.602672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.997 [2024-07-15 09:50:14.602702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.997 [2024-07-15 09:50:14.613698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.997 [2024-07-15 09:50:14.613727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.997 [2024-07-15 09:50:14.624566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.998 [2024-07-15 09:50:14.624594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.998 [2024-07-15 09:50:14.635424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.998 [2024-07-15 09:50:14.635452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.998 [2024-07-15 09:50:14.646007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.998 [2024-07-15 09:50:14.646035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.998 [2024-07-15 09:50:14.658527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.998 [2024-07-15 09:50:14.658555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.998 [2024-07-15 09:50:14.668703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.998 [2024-07-15 09:50:14.668731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.998 [2024-07-15 09:50:14.679048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.998 [2024-07-15 09:50:14.679076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.998 [2024-07-15 09:50:14.689512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.998 [2024-07-15 09:50:14.689540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.998 [2024-07-15 09:50:14.699688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.998 [2024-07-15 09:50:14.699717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.998 [2024-07-15 09:50:14.710182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.998 [2024-07-15 09:50:14.710210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.998 [2024-07-15 09:50:14.720578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.998 [2024-07-15 09:50:14.720605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.998 [2024-07-15 09:50:14.731220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.998 [2024-07-15 09:50:14.731248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.998 [2024-07-15 09:50:14.741451] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.998 [2024-07-15 09:50:14.741478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.998 [2024-07-15 09:50:14.751834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.998 [2024-07-15 09:50:14.751861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.998 [2024-07-15 09:50:14.762128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.998 [2024-07-15 09:50:14.762156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.998 [2024-07-15 09:50:14.772577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.998 [2024-07-15 09:50:14.772605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.256 [2024-07-15 09:50:14.783140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.256 [2024-07-15 09:50:14.783168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.256 [2024-07-15 09:50:14.793562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.256 [2024-07-15 09:50:14.793590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.256 [2024-07-15 09:50:14.804052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.256 [2024-07-15 09:50:14.804080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.256 [2024-07-15 09:50:14.814623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.256 [2024-07-15 09:50:14.814651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.256 [2024-07-15 09:50:14.825096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.256 [2024-07-15 09:50:14.825123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.256 [2024-07-15 09:50:14.835633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.256 [2024-07-15 09:50:14.835661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.256 [2024-07-15 09:50:14.845721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.257 [2024-07-15 09:50:14.845748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.257 [2024-07-15 09:50:14.856026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.257 [2024-07-15 09:50:14.856053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.257 [2024-07-15 09:50:14.866805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.257 [2024-07-15 09:50:14.866833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.257 [2024-07-15 09:50:14.876757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.257 [2024-07-15 09:50:14.876784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.257 [2024-07-15 09:50:14.887610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.257 [2024-07-15 09:50:14.887638] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.257 [2024-07-15 09:50:14.899841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.257 [2024-07-15 09:50:14.899869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.257 [2024-07-15 09:50:14.909746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.257 [2024-07-15 09:50:14.909773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.257 [2024-07-15 09:50:14.920935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.257 [2024-07-15 09:50:14.920962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.257 [2024-07-15 09:50:14.932945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.257 [2024-07-15 09:50:14.932972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.257 [2024-07-15 09:50:14.942522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.257 [2024-07-15 09:50:14.942550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.257 [2024-07-15 09:50:14.953348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.257 [2024-07-15 09:50:14.953375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.257 [2024-07-15 09:50:14.963971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.257 [2024-07-15 09:50:14.963999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.257 [2024-07-15 09:50:14.976345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.257 [2024-07-15 09:50:14.976373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.257 [2024-07-15 09:50:14.985732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.257 [2024-07-15 09:50:14.985760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.257 [2024-07-15 09:50:14.996088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.257 [2024-07-15 09:50:14.996115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.257 [2024-07-15 09:50:15.007118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.257 [2024-07-15 09:50:15.007146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.257 [2024-07-15 09:50:15.017675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.257 [2024-07-15 09:50:15.017702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.257 [2024-07-15 09:50:15.028095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.257 [2024-07-15 09:50:15.028123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.257 [2024-07-15 09:50:15.040408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.257 [2024-07-15 09:50:15.040436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.516 [2024-07-15 09:50:15.050010] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.516 [2024-07-15 09:50:15.050038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:58.516-00:17:01.906 [2024-07-15 09:50:15.060527 to 09:50:18.502687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (this identical error pair repeats, at roughly 10-13 ms intervals, for every duplicate-NSID add attempt in this window)
00:17:01.906 [2024-07-15 09:50:18.513897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:01.906 [2024-07-15 09:50:18.513946] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:01.906 [2024-07-15 09:50:18.525206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:01.906 [2024-07-15 09:50:18.525237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:01.906 [2024-07-15 09:50:18.536557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:01.906 [2024-07-15 09:50:18.536587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:01.906 [2024-07-15 09:50:18.548004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:01.906 [2024-07-15 09:50:18.548031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:01.906 [2024-07-15 09:50:18.559274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:01.906 [2024-07-15 09:50:18.559305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:01.906 [2024-07-15 09:50:18.570844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:01.906 [2024-07-15 09:50:18.570884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:01.906 [2024-07-15 09:50:18.582535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:01.906 [2024-07-15 09:50:18.582565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:01.906 [2024-07-15 09:50:18.595758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:01.906 [2024-07-15 09:50:18.595789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:01.906 [2024-07-15 09:50:18.606193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:01.906 [2024-07-15 09:50:18.606223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:01.906 [2024-07-15 09:50:18.618247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:01.906 [2024-07-15 09:50:18.618278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:01.906 [2024-07-15 09:50:18.629545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:01.906 [2024-07-15 09:50:18.629576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:01.906 [2024-07-15 09:50:18.642743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:01.906 [2024-07-15 09:50:18.642774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:01.906 [2024-07-15 09:50:18.653740] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:01.906 [2024-07-15 09:50:18.653771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:01.906 [2024-07-15 09:50:18.665786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:01.906 [2024-07-15 09:50:18.665816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:01.906 [2024-07-15 09:50:18.677086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:01.906 [2024-07-15 09:50:18.677113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.690574] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.690615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.701168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.701196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.712084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.712111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.725338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.725369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.735799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.735830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.747591] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.747622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.759208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.759239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.774896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.774944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.785507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.785538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.797030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.797058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.808781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.808811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.820315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.820346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.831729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.831759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.843552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.843584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.854927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.854955] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.866595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.866625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.878327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.878358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.889807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.889838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.901233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.901263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.912718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.912760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.165 [2024-07-15 09:50:18.923656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.165 [2024-07-15 09:50:18.923688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.166 [2024-07-15 09:50:18.935198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.166 [2024-07-15 09:50:18.935229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.166 [2024-07-15 09:50:18.946503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.166 [2024-07-15 09:50:18.946534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:18.957802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:18.957834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:18.969390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:18.969421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:18.981027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:18.981054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:18.994447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:18.994478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:19.005707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:19.005738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:19.016543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:19.016575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:19.028038] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:19.028067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:19.039084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:19.039112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:19.050780] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:19.050811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:19.062084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:19.062112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:19.075514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:19.075545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:19.086168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:19.086196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:19.097935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:19.097963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:19.108951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:19.108979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:19.120014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:19.120042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:19.131774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:19.131816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:19.143126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:19.143168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:19.154378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:19.154409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:19.165792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:19.165822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:19.177128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:19.177158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:19.188281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.424 [2024-07-15 09:50:19.188313] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.424 [2024-07-15 09:50:19.199532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.425 [2024-07-15 09:50:19.199564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.210980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.211009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.222541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.222572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.234264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.234303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.246100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.246128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.257609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.257640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.269064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.269093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.282818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.282849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.293794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.293826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.305389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.305420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.317279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.317311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.328769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.328800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.340728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.340759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.352320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.352360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.363858] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.363898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.375441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.375472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.386827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.386858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.400247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.400277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.410731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.410762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.422277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.422309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.433044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.433073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.444398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.444430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.454085] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.683 [2024-07-15 09:50:19.454112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.683 [2024-07-15 09:50:19.459925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.684 [2024-07-15 09:50:19.459955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.684 00:17:02.684 Latency(us) 00:17:02.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.684 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:17:02.684 Nvme1n1 : 5.01 11366.40 88.80 0.00 0.00 11246.20 4854.52 21262.79 00:17:02.684 =================================================================================================================== 00:17:02.684 Total : 11366.40 88.80 0.00 0.00 11246.20 4854.52 21262.79 00:17:02.942 [2024-07-15 09:50:19.467967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.467994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.475973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.476002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.484048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.484097] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.492055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.492103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.500077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.500124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.508107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.508156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.516125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.516176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.524149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.524196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.532166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.532215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.540187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.540234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.548212] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.548262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.556236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.556284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.564262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.564312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.572278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.572325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.580294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.580344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.588317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.588364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.596323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.596365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.604313] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.604337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.612356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.612390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.620401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.620448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.628427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.942 [2024-07-15 09:50:19.628475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.942 [2024-07-15 09:50:19.636413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.943 [2024-07-15 09:50:19.636445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.943 [2024-07-15 09:50:19.644425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.943 [2024-07-15 09:50:19.644453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.943 [2024-07-15 09:50:19.652487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.943 [2024-07-15 09:50:19.652532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.943 [2024-07-15 09:50:19.660508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.943 [2024-07-15 09:50:19.660554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.943 [2024-07-15 09:50:19.668495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.943 [2024-07-15 09:50:19.668523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.943 [2024-07-15 09:50:19.676497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.943 [2024-07-15 09:50:19.676519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.943 [2024-07-15 09:50:19.684516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:02.943 [2024-07-15 09:50:19.684537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1894208) - No such process 00:17:02.943 09:50:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1894208 00:17:02.943 09:50:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:02.943 09:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.943 09:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:02.943 09:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.943 09:50:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:02.943 09:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.943 09:50:19 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:17:02.943 delay0 00:17:02.943 09:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.943 09:50:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:02.943 09:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.943 09:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:02.943 09:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.943 09:50:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:03.201 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.201 [2024-07-15 09:50:19.845051] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:09.760 Initializing NVMe Controllers 00:17:09.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:09.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:09.760 Initialization complete. Launching workers. 00:17:09.760 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 132 00:17:09.760 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 407, failed to submit 45 00:17:09.760 success 259, unsuccess 148, failed 0 00:17:09.760 09:50:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:09.760 09:50:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:17:09.760 09:50:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:09.760 09:50:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:17:09.760 09:50:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:09.760 09:50:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:17:09.760 09:50:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:09.760 09:50:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:09.760 rmmod nvme_tcp 00:17:09.760 rmmod nvme_fabrics 00:17:09.760 rmmod nvme_keyring 00:17:09.760 09:50:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1892883 ']' 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1892883 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1892883 ']' 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1892883 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1892883 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:09.760 09:50:26 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1892883' 00:17:09.760 killing process with pid 1892883 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1892883 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1892883 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.760 09:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.664 09:50:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:11.664 00:17:11.664 real 0m27.728s 00:17:11.664 user 0m40.790s 00:17:11.664 sys 0m8.306s 00:17:11.664 09:50:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:11.664 09:50:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:11.664 ************************************ 00:17:11.664 END TEST nvmf_zcopy 00:17:11.664 ************************************ 00:17:11.664 09:50:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:11.664 09:50:28 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:11.664 09:50:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:11.664 09:50:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:11.664 09:50:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:11.664 ************************************ 00:17:11.664 START TEST nvmf_nmic 00:17:11.664 ************************************ 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:11.664 * Looking for test storage... 
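Between the namespace churn above and the final teardown, the zcopy trace rebuilds namespace 1 on top of a delay bdev and drives the abort example against it, so there are always queued commands to abort. A condensed sketch of those steps, assuming a running nvmf_tgt, the SPDK tree at $SPDK, and plain scripts/rpc.py in place of the rpc_cmd wrapper used by the trace:

    #!/usr/bin/env bash
    # Sketch only: mirrors the rpc_cmd lines in the trace above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Free NSID 1 first; the flood of "Requested NSID 1 already in use"
    # errors above came from add_ns calls made while it was still attached.
    "$SPDK/scripts/rpc.py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # Wrap malloc0 in a delay bdev (latencies in microseconds) so I/O stays
    # in flight long enough for aborts to land.
    "$SPDK/scripts/rpc.py" bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Re-expose it as namespace 1 of the test subsystem.
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # 5 seconds of queue-depth-64 random I/O with abort submissions.
    "$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'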
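The killprocess / kill -0 sequence visible in the epilogue is a reusable liveness idiom. An illustrative reconstruction, not the verbatim autotest_common.sh source; the guards follow what the trace shows:

    # kill -0 sends no signal; it only reports whether the PID exists and
    # is signalable, which is why the trace probes it before killing.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0    # already gone, nothing to do
        # ps --no-headers -o comm= prints only the command name (reactor_1
        # in this run); the trace refuses to signal anything named "sudo".
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        kill "$pid"
        wait "$pid" 2>/dev/null || true           # reap it if it was our child
    }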
00:17:11.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.664 09:50:28 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.665 09:50:28 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:17:11.665 09:50:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:13.564 
09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:13.564 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.564 09:50:30 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:13.564 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:13.564 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:13.564 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
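The device discovery above boils down to: build per-vendor ID tables (e810, x722, mlx), match each PCI NIC against them, then read its kernel interface names out of sysfs. A minimal standalone sketch under those assumptions; 0x8086/0x159b is the E810-family ID this run matched (driver ice):

    # List kernel net devices behind Intel 0x159b NICs, as the trace does
    # via /sys/bus/pci/devices/$pci/net/*. Assumes Linux sysfs.
    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor")" = 0x8086 ] || continue
        [ "$(cat "$pci/device")" = 0x159b ] || continue
        for net in "$pci"/net/*; do
            [ -e "$net" ] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done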
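What follows in the trace is the TCP test-bed plumbing: the first E810 port (cvl_0_0) becomes the target and is moved into a private network namespace, the second (cvl_0_1) stays in the host namespace as the initiator, and a ping in each direction proves 10.0.0.1 <-> 10.0.0.2 connectivity. The same steps as a plain script, assuming root and iproute2; the commands mirror the nvmf_tcp_init lines below:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                    # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> host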
00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:13.564 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:13.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:13.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:17:13.822 00:17:13.822 --- 10.0.0.2 ping statistics --- 00:17:13.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.822 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:13.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:13.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:17:13.822 00:17:13.822 --- 10.0.0.1 ping statistics --- 00:17:13.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.822 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1897462 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1897462 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1897462 ']' 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.822 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:13.822 [2024-07-15 09:50:30.508530] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:13.822 [2024-07-15 09:50:30.508611] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.822 EAL: No free 2048 kB hugepages reported on node 1 00:17:13.822 [2024-07-15 09:50:30.552787] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:13.822 [2024-07-15 09:50:30.582025] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:14.080 [2024-07-15 09:50:30.679270] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
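The nvmf_tcp_init steps above split the two E810 ports into a target/initiator pair: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and the two pings prove reachability in both directions before the target starts. As a hedged sketch only, the same topology can be approximated without dedicated NICs by a veth pair; every name below is invented for illustration and none of this is part of nvmf/common.sh:

    ip netns add tgt_ns                                 # plays the role of cvl_0_0_ns_spdk
    ip link add veth_ini type veth peer name veth_tgt   # software stand-in for the two ports
    ip link set veth_tgt netns tgt_ns
    ip addr add 10.0.0.1/24 dev veth_ini                # initiator side, as at common.sh@254
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_ini up
    ip netns exec tgt_ns ip link set veth_tgt up
    ip netns exec tgt_ns ip link set lo up
    ping -c 1 10.0.0.2                                  # reachability check, as at common.sh@267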
00:17:14.080 [2024-07-15 09:50:30.679328] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.080 [2024-07-15 09:50:30.679357] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.080 [2024-07-15 09:50:30.679368] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.080 [2024-07-15 09:50:30.679377] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.080 [2024-07-15 09:50:30.679450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.080 [2024-07-15 09:50:30.679482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.080 [2024-07-15 09:50:30.679540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:14.080 [2024-07-15 09:50:30.679542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.080 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.080 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:17:14.080 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:14.080 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:14.080 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:14.080 09:50:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.080 09:50:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:14.080 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.080 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:14.080 [2024-07-15 09:50:30.834746] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.080 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.080 09:50:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:14.080 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.080 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:14.336 Malloc0 00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 
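The four "Reactor started" notices above follow directly from the -m 0xF core mask handed to nvmf_tgt: SPDK spawns one reactor per set bit, here cores 0-3. A trivial illustration of decoding such a mask (illustrative shell, not from the suite):

    mask=0xF
    for ((core = 0; core < 8; core++)); do
        if (( (mask >> core) & 1 )); then
            echo "reactor expected on core $core"       # prints cores 0..3 for 0xF
        fi
    done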
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:17:14.336 [2024-07-15 09:50:30.888514] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:17:14.336 test case1: single bdev can't be used in multiple subsystems
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:17:14.336 [2024-07-15 09:50:30.912345] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:17:14.336 [2024-07-15 09:50:30.912374] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:17:14.336 [2024-07-15 09:50:30.912388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:14.336 request:
00:17:14.336 {
00:17:14.336 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:17:14.336 "namespace": {
00:17:14.336 "bdev_name": "Malloc0",
00:17:14.336 "no_auto_visible": false
00:17:14.336 },
00:17:14.336 "method": "nvmf_subsystem_add_ns",
00:17:14.336 "req_id": 1
00:17:14.336 }
00:17:14.336 Got JSON-RPC error response
00:17:14.336 response:
00:17:14.336 {
00:17:14.336 "code": -32602,
00:17:14.336 "message": "Invalid parameters"
00:17:14.336 }
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:17:14.336 Adding namespace failed - expected result.
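Test case1 passes precisely because the last RPC fails: a bdev is claimed exclusive_write by the first subsystem that adds it as a namespace, so the second nvmf_subsystem_add_ns is rejected at bdev_open, rpc_cmd returns nonzero, and nmic.sh records that in nmic_status. A condensed sketch of the sequence exercised above, with rpc.py standing in for scripts/rpc.py against the running target:

    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # first claim succeeds
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    if ! rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo 'expected failure: Malloc0 already claimed (JSON-RPC -32602)'
    fi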
00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:14.336 test case2: host connect to nvmf target in multiple paths 00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:14.336 [2024-07-15 09:50:30.924462] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.336 09:50:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:14.898 09:50:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:15.827 09:50:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:15.827 09:50:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:17:15.827 09:50:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:15.827 09:50:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:15.827 09:50:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:17:17.721 09:50:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:17.721 09:50:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:17.721 09:50:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:17.721 09:50:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:17.721 09:50:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:17.721 09:50:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:17:17.721 09:50:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:17.721 [global] 00:17:17.721 thread=1 00:17:17.721 invalidate=1 00:17:17.721 rw=write 00:17:17.721 time_based=1 00:17:17.721 runtime=1 00:17:17.721 ioengine=libaio 00:17:17.721 direct=1 00:17:17.721 bs=4096 00:17:17.721 iodepth=1 00:17:17.721 norandommap=0 00:17:17.721 numjobs=1 00:17:17.721 00:17:17.721 verify_dump=1 00:17:17.721 verify_backlog=512 00:17:17.721 verify_state_save=0 00:17:17.721 do_verify=1 00:17:17.721 verify=crc32c-intel 00:17:17.721 [job0] 00:17:17.721 filename=/dev/nvme0n1 00:17:17.721 Could not set queue depth (nvme0n1) 00:17:17.979 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:17.979 fio-3.35 00:17:17.979 Starting 1 thread 00:17:18.930 00:17:18.930 job0: (groupid=0, jobs=1): err= 0: pid=1898097: Mon Jul 15 09:50:35 2024 00:17:18.930 read: IOPS=1541, BW=6166KiB/s (6314kB/s)(6172KiB/1001msec) 00:17:18.930 slat (nsec): min=5821, max=69570, avg=18709.82, stdev=10050.36 
00:17:18.930 clat (usec): min=245, max=549, avg=332.44, stdev=44.00 00:17:18.931 lat (usec): min=261, max=582, avg=351.15, stdev=49.84 00:17:18.931 clat percentiles (usec): 00:17:18.931 | 1.00th=[ 258], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 306], 00:17:18.931 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 326], 00:17:18.931 | 70.00th=[ 343], 80.00th=[ 363], 90.00th=[ 388], 95.00th=[ 404], 00:17:18.931 | 99.00th=[ 523], 99.50th=[ 529], 99.90th=[ 537], 99.95th=[ 553], 00:17:18.931 | 99.99th=[ 553] 00:17:18.931 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:17:18.931 slat (nsec): min=7778, max=63973, avg=18384.42, stdev=6604.31 00:17:18.931 clat (usec): min=162, max=3065, avg=196.87, stdev=69.42 00:17:18.931 lat (usec): min=170, max=3094, avg=215.26, stdev=71.46 00:17:18.931 clat percentiles (usec): 00:17:18.931 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 180], 00:17:18.931 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 192], 00:17:18.931 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 229], 95.00th=[ 245], 00:17:18.931 | 99.00th=[ 326], 99.50th=[ 363], 99.90th=[ 379], 99.95th=[ 379], 00:17:18.931 | 99.99th=[ 3064] 00:17:18.931 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:17:18.931 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:17:18.931 lat (usec) : 250=54.44%, 500=44.86%, 750=0.67% 00:17:18.931 lat (msec) : 4=0.03% 00:17:18.931 cpu : usr=3.60%, sys=6.70%, ctx=3593, majf=0, minf=2 00:17:18.931 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:18.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.931 issued rwts: total=1543,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.931 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:18.931 00:17:18.931 Run status group 0 (all jobs): 00:17:18.931 READ: bw=6166KiB/s (6314kB/s), 6166KiB/s-6166KiB/s (6314kB/s-6314kB/s), io=6172KiB (6320kB), run=1001-1001msec 00:17:18.931 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:17:18.931 00:17:18.931 Disk stats (read/write): 00:17:18.931 nvme0n1: ios=1588/1595, merge=0/0, ticks=811/299, in_queue=1110, util=99.00% 00:17:18.931 09:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:19.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
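Both waitforserial (before fio) and waitforserial_disconnect (after nvme disconnect) poll lsblk for the subsystem serial SPDKISFASTANDAWESOME rather than sleeping a fixed time. A hedged sketch of that polling pattern; the helper name and 2-second interval are assumptions, while the 15-iteration budget and the lsblk/grep pipeline match the trace above:

    wait_serial_state() {               # illustrative helper, not the one in autotest_common.sh
        local serial=$1 want=$2 i=0     # want=1: wait for attach; want=0: wait for detach
        while (( i++ <= 15 )); do       # same retry ceiling the trace shows
            local n
            n=$(lsblk -l -o NAME,SERIAL | grep -c -w "$serial")
            (( want == 1 && n >= 1 )) && return 0
            (( want == 0 && n == 0 )) && return 0
            sleep 2
        done
        return 1
    }
    wait_serial_state SPDKISFASTANDAWESOME 0    # block until the namespace is gone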
00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:19.206 rmmod nvme_tcp 00:17:19.206 rmmod nvme_fabrics 00:17:19.206 rmmod nvme_keyring 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1897462 ']' 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1897462 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1897462 ']' 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1897462 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1897462 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1897462' 00:17:19.206 killing process with pid 1897462 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1897462 00:17:19.206 09:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1897462 00:17:19.465 09:50:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:19.465 09:50:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:19.465 09:50:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:19.465 09:50:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:19.465 09:50:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:19.465 09:50:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.465 09:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.465 09:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.023 09:50:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:22.023 00:17:22.023 real 0m9.833s 00:17:22.023 user 0m22.509s 00:17:22.023 sys 0m2.396s 00:17:22.023 09:50:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:22.023 09:50:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:22.023 ************************************ 00:17:22.023 END TEST nvmf_nmic 00:17:22.023 ************************************ 00:17:22.023 09:50:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:22.023 09:50:38 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:22.023 09:50:38 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:22.023 09:50:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:22.023 09:50:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:22.023 ************************************ 00:17:22.023 START TEST nvmf_fio_target 00:17:22.023 ************************************ 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:22.023 * Looking for test storage... 00:17:22.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:22.023 09:50:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:23.924 09:50:40 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:23.924 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:23.924 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:23.924 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.925 09:50:40 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:23.925 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:23.925 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:23.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:23.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:17:23.925 00:17:23.925 --- 10.0.0.2 ping statistics --- 00:17:23.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.925 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:23.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:23.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:17:23.925 00:17:23.925 --- 10.0.0.1 ping statistics --- 00:17:23.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.925 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1900168 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1900168 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1900168 ']' 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
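nvmfappstart launches nvmf_tgt inside the namespace, and waitforlisten (max_retries=100 in the trace) blocks until the new pid answers on /var/tmp/spdk.sock before the first rpc_cmd runs. A hedged approximation of that wait; the real helper also probes the RPC layer, whereas this sketch only tests that the UNIX socket exists:

    wait_for_rpc_sock() {                       # illustrative only, not the real waitforlisten
        local sock=${1:-/var/tmp/spdk.sock} max_retries=${2:-100} i=0
        while (( i++ < max_retries )); do
            [[ -S $sock ]] && return 0          # socket present: target is accepting RPCs
            sleep 0.1
        done
        echo "timed out waiting for $sock" >&2
        return 1
    }
    wait_for_rpc_sock /var/tmp/spdk.sock 100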
00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:23.925 09:50:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.925 [2024-07-15 09:50:40.500424] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:23.925 [2024-07-15 09:50:40.500521] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.925 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.925 [2024-07-15 09:50:40.539695] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:23.925 [2024-07-15 09:50:40.566568] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:23.925 [2024-07-15 09:50:40.655581] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:23.925 [2024-07-15 09:50:40.655654] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:23.925 [2024-07-15 09:50:40.655667] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:23.925 [2024-07-15 09:50:40.655677] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:23.925 [2024-07-15 09:50:40.655686] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:23.925 [2024-07-15 09:50:40.655806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.925 [2024-07-15 09:50:40.655836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.925 [2024-07-15 09:50:40.655961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:23.925 [2024-07-15 09:50:40.655963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.183 09:50:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:24.183 09:50:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:17:24.183 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:24.183 09:50:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:24.183 09:50:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.183 09:50:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.183 09:50:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:24.440 [2024-07-15 09:50:41.045491] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.441 09:50:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:24.698 09:50:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:24.698 09:50:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:24.956 09:50:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:24.956 09:50:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:25.213 09:50:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:25.213 09:50:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:25.471 09:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:25.471 09:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:25.729 09:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:25.987 09:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:25.987 09:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:26.245 09:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:26.245 09:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:26.503 09:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:26.503 09:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:26.760 09:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:27.016 09:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:27.016 09:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:27.272 09:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:27.272 09:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:27.529 09:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.786 [2024-07-15 09:50:44.379936] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.786 09:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:28.043 09:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:28.300 09:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:28.865 09:50:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # 
waitforserial SPDKISFASTANDAWESOME 4 00:17:28.865 09:50:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:17:28.865 09:50:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:28.865 09:50:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:17:28.865 09:50:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:17:28.865 09:50:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:31.390 09:50:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:31.390 09:50:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:31.391 09:50:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:31.391 09:50:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:31.391 09:50:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:31.391 09:50:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:31.391 09:50:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:31.391 [global] 00:17:31.391 thread=1 00:17:31.391 invalidate=1 00:17:31.391 rw=write 00:17:31.391 time_based=1 00:17:31.391 runtime=1 00:17:31.391 ioengine=libaio 00:17:31.391 direct=1 00:17:31.391 bs=4096 00:17:31.391 iodepth=1 00:17:31.391 norandommap=0 00:17:31.391 numjobs=1 00:17:31.391 00:17:31.391 verify_dump=1 00:17:31.391 verify_backlog=512 00:17:31.391 verify_state_save=0 00:17:31.391 do_verify=1 00:17:31.391 verify=crc32c-intel 00:17:31.391 [job0] 00:17:31.391 filename=/dev/nvme0n1 00:17:31.391 [job1] 00:17:31.391 filename=/dev/nvme0n2 00:17:31.391 [job2] 00:17:31.391 filename=/dev/nvme0n3 00:17:31.391 [job3] 00:17:31.391 filename=/dev/nvme0n4 00:17:31.391 Could not set queue depth (nvme0n1) 00:17:31.391 Could not set queue depth (nvme0n2) 00:17:31.391 Could not set queue depth (nvme0n3) 00:17:31.391 Could not set queue depth (nvme0n4) 00:17:31.391 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:31.391 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:31.391 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:31.391 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:31.391 fio-3.35 00:17:31.391 Starting 4 threads 00:17:32.324 00:17:32.324 job0: (groupid=0, jobs=1): err= 0: pid=1901243: Mon Jul 15 09:50:49 2024 00:17:32.324 read: IOPS=23, BW=93.1KiB/s (95.3kB/s)(96.0KiB/1031msec) 00:17:32.324 slat (nsec): min=10276, max=37231, avg=26764.63, stdev=9627.86 00:17:32.324 clat (usec): min=366, max=42209, avg=37254.97, stdev=12152.56 00:17:32.324 lat (usec): min=384, max=42220, avg=37281.74, stdev=12153.30 00:17:32.324 clat percentiles (usec): 00:17:32.324 | 1.00th=[ 367], 5.00th=[ 449], 10.00th=[20317], 20.00th=[41157], 00:17:32.324 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:17:32.324 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:32.324 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:32.324 | 
99.99th=[42206] 00:17:32.324 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:17:32.324 slat (usec): min=7, max=1201, avg=25.64, stdev=52.46 00:17:32.324 clat (usec): min=175, max=1157, avg=233.63, stdev=56.77 00:17:32.324 lat (usec): min=185, max=1426, avg=259.27, stdev=78.20 00:17:32.324 clat percentiles (usec): 00:17:32.324 | 1.00th=[ 186], 5.00th=[ 198], 10.00th=[ 208], 20.00th=[ 217], 00:17:32.324 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:17:32.324 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 258], 95.00th=[ 277], 00:17:32.324 | 99.00th=[ 449], 99.50th=[ 603], 99.90th=[ 1156], 99.95th=[ 1156], 00:17:32.324 | 99.99th=[ 1156] 00:17:32.324 bw ( KiB/s): min= 4096, max= 4096, per=25.93%, avg=4096.00, stdev= 0.00, samples=1 00:17:32.324 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:32.324 lat (usec) : 250=84.33%, 500=10.82%, 750=0.56% 00:17:32.324 lat (msec) : 2=0.19%, 50=4.10% 00:17:32.324 cpu : usr=1.07%, sys=1.26%, ctx=538, majf=0, minf=1 00:17:32.324 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:32.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.324 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.324 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:32.324 job1: (groupid=0, jobs=1): err= 0: pid=1901244: Mon Jul 15 09:50:49 2024 00:17:32.324 read: IOPS=1443, BW=5774KiB/s (5913kB/s)(5780KiB/1001msec) 00:17:32.324 slat (nsec): min=5809, max=55750, avg=15185.61, stdev=7428.71 00:17:32.324 clat (usec): min=287, max=997, avg=383.62, stdev=108.29 00:17:32.324 lat (usec): min=296, max=1053, avg=398.80, stdev=113.82 00:17:32.324 clat percentiles (usec): 00:17:32.324 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 318], 00:17:32.324 | 30.00th=[ 326], 40.00th=[ 343], 50.00th=[ 351], 60.00th=[ 359], 00:17:32.324 | 70.00th=[ 367], 80.00th=[ 388], 90.00th=[ 519], 95.00th=[ 644], 00:17:32.324 | 99.00th=[ 807], 99.50th=[ 832], 99.90th=[ 898], 99.95th=[ 996], 00:17:32.324 | 99.99th=[ 996] 00:17:32.324 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:17:32.324 slat (nsec): min=8607, max=67689, avg=19925.34, stdev=7962.00 00:17:32.324 clat (usec): min=186, max=626, avg=246.19, stdev=50.68 00:17:32.324 lat (usec): min=195, max=654, avg=266.12, stdev=55.82 00:17:32.324 clat percentiles (usec): 00:17:32.324 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:17:32.324 | 30.00th=[ 217], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 243], 00:17:32.324 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 310], 95.00th=[ 351], 00:17:32.324 | 99.00th=[ 441], 99.50th=[ 478], 99.90th=[ 570], 99.95th=[ 627], 00:17:32.324 | 99.99th=[ 627] 00:17:32.324 bw ( KiB/s): min= 6664, max= 6664, per=42.18%, avg=6664.00, stdev= 0.00, samples=1 00:17:32.324 iops : min= 1666, max= 1666, avg=1666.00, stdev= 0.00, samples=1 00:17:32.324 lat (usec) : 250=34.72%, 500=59.88%, 750=4.16%, 1000=1.24% 00:17:32.324 cpu : usr=4.10%, sys=7.00%, ctx=2982, majf=0, minf=1 00:17:32.324 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:32.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.324 issued rwts: total=1445,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.324 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:17:32.324 job2: (groupid=0, jobs=1): err= 0: pid=1901247: Mon Jul 15 09:50:49 2024 00:17:32.324 read: IOPS=18, BW=73.3KiB/s (75.0kB/s)(76.0KiB/1037msec) 00:17:32.324 slat (nsec): min=14504, max=35764, avg=27770.68, stdev=9105.88 00:17:32.324 clat (usec): min=40881, max=41361, avg=40980.92, stdev=100.80 00:17:32.324 lat (usec): min=40917, max=41386, avg=41008.69, stdev=98.49 00:17:32.324 clat percentiles (usec): 00:17:32.324 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:17:32.324 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:32.324 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:32.324 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:32.324 | 99.99th=[41157] 00:17:32.324 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:17:32.324 slat (usec): min=20, max=38707, avg=162.95, stdev=2131.32 00:17:32.324 clat (usec): min=216, max=525, avg=332.72, stdev=48.89 00:17:32.324 lat (usec): min=241, max=38994, avg=495.67, stdev=2131.77 00:17:32.324 clat percentiles (usec): 00:17:32.324 | 1.00th=[ 237], 5.00th=[ 269], 10.00th=[ 281], 20.00th=[ 297], 00:17:32.324 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 330], 00:17:32.324 | 70.00th=[ 363], 80.00th=[ 379], 90.00th=[ 396], 95.00th=[ 420], 00:17:32.324 | 99.00th=[ 465], 99.50th=[ 486], 99.90th=[ 529], 99.95th=[ 529], 00:17:32.324 | 99.99th=[ 529] 00:17:32.324 bw ( KiB/s): min= 4096, max= 4096, per=25.93%, avg=4096.00, stdev= 0.00, samples=1 00:17:32.324 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:32.324 lat (usec) : 250=1.51%, 500=94.73%, 750=0.19% 00:17:32.324 lat (msec) : 50=3.58% 00:17:32.324 cpu : usr=0.68%, sys=2.03%, ctx=534, majf=0, minf=1 00:17:32.324 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:32.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.324 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.324 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:32.324 job3: (groupid=0, jobs=1): err= 0: pid=1901248: Mon Jul 15 09:50:49 2024 00:17:32.324 read: IOPS=1381, BW=5526KiB/s (5659kB/s)(5532KiB/1001msec) 00:17:32.324 slat (nsec): min=5988, max=65484, avg=15666.39, stdev=8253.33 00:17:32.324 clat (usec): min=282, max=40709, avg=398.02, stdev=1085.52 00:17:32.324 lat (usec): min=297, max=40724, avg=413.69, stdev=1085.67 00:17:32.324 clat percentiles (usec): 00:17:32.324 | 1.00th=[ 314], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 338], 00:17:32.324 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 371], 00:17:32.324 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[ 416], 95.00th=[ 478], 00:17:32.324 | 99.00th=[ 502], 99.50th=[ 519], 99.90th=[ 594], 99.95th=[40633], 00:17:32.324 | 99.99th=[40633] 00:17:32.324 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:17:32.324 slat (nsec): min=8140, max=55472, avg=19560.87, stdev=8000.05 00:17:32.324 clat (usec): min=187, max=2305, avg=249.87, stdev=68.25 00:17:32.324 lat (usec): min=197, max=2327, avg=269.43, stdev=71.49 00:17:32.324 clat percentiles (usec): 00:17:32.324 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:17:32.324 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 237], 60.00th=[ 249], 00:17:32.324 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 297], 
95.00th=[ 318], 00:17:32.324 | 99.00th=[ 379], 99.50th=[ 392], 99.90th=[ 742], 99.95th=[ 2311], 00:17:32.324 | 99.99th=[ 2311] 00:17:32.324 bw ( KiB/s): min= 6528, max= 6528, per=41.32%, avg=6528.00, stdev= 0.00, samples=1 00:17:32.324 iops : min= 1632, max= 1632, avg=1632.00, stdev= 0.00, samples=1 00:17:32.324 lat (usec) : 250=31.86%, 500=67.49%, 750=0.58% 00:17:32.324 lat (msec) : 4=0.03%, 50=0.03% 00:17:32.324 cpu : usr=3.60%, sys=6.60%, ctx=2920, majf=0, minf=2 00:17:32.324 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:32.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.324 issued rwts: total=1383,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.324 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:32.324 00:17:32.324 Run status group 0 (all jobs): 00:17:32.324 READ: bw=10.8MiB/s (11.3MB/s), 73.3KiB/s-5774KiB/s (75.0kB/s-5913kB/s), io=11.2MiB (11.8MB), run=1001-1037msec 00:17:32.324 WRITE: bw=15.4MiB/s (16.2MB/s), 1975KiB/s-6138KiB/s (2022kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1037msec 00:17:32.324 00:17:32.324 Disk stats (read/write): 00:17:32.324 nvme0n1: ios=72/512, merge=0/0, ticks=806/112, in_queue=918, util=84.87% 00:17:32.324 nvme0n2: ios=1047/1474, merge=0/0, ticks=1314/352, in_queue=1666, util=88.90% 00:17:32.324 nvme0n3: ios=66/512, merge=0/0, ticks=972/149, in_queue=1121, util=94.33% 00:17:32.324 nvme0n4: ios=1048/1398, merge=0/0, ticks=1322/326, in_queue=1648, util=94.17% 00:17:32.324 09:50:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:32.324 [global] 00:17:32.324 thread=1 00:17:32.324 invalidate=1 00:17:32.324 rw=randwrite 00:17:32.324 time_based=1 00:17:32.324 runtime=1 00:17:32.324 ioengine=libaio 00:17:32.324 direct=1 00:17:32.324 bs=4096 00:17:32.324 iodepth=1 00:17:32.324 norandommap=0 00:17:32.324 numjobs=1 00:17:32.324 00:17:32.324 verify_dump=1 00:17:32.324 verify_backlog=512 00:17:32.324 verify_state_save=0 00:17:32.324 do_verify=1 00:17:32.324 verify=crc32c-intel 00:17:32.324 [job0] 00:17:32.324 filename=/dev/nvme0n1 00:17:32.324 [job1] 00:17:32.324 filename=/dev/nvme0n2 00:17:32.324 [job2] 00:17:32.324 filename=/dev/nvme0n3 00:17:32.324 [job3] 00:17:32.325 filename=/dev/nvme0n4 00:17:32.582 Could not set queue depth (nvme0n1) 00:17:32.582 Could not set queue depth (nvme0n2) 00:17:32.583 Could not set queue depth (nvme0n3) 00:17:32.583 Could not set queue depth (nvme0n4) 00:17:32.583 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:32.583 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:32.583 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:32.583 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:32.583 fio-3.35 00:17:32.583 Starting 4 threads 00:17:33.955 00:17:33.955 job0: (groupid=0, jobs=1): err= 0: pid=1901472: Mon Jul 15 09:50:50 2024 00:17:33.955 read: IOPS=307, BW=1229KiB/s (1258kB/s)(1268KiB/1032msec) 00:17:33.955 slat (nsec): min=7970, max=49962, avg=13312.09, stdev=5751.67 00:17:33.955 clat (usec): min=308, max=41363, avg=2799.59, stdev=9659.93 00:17:33.955 lat (usec): min=317, max=41379, 
avg=2812.91, stdev=9661.99 00:17:33.955 clat percentiles (usec): 00:17:33.955 | 1.00th=[ 326], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 347], 00:17:33.955 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 367], 60.00th=[ 371], 00:17:33.955 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[ 408], 95.00th=[41157], 00:17:33.955 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:33.955 | 99.99th=[41157] 00:17:33.955 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:17:33.955 slat (nsec): min=6687, max=52519, avg=18096.75, stdev=5418.79 00:17:33.955 clat (usec): min=184, max=443, avg=247.04, stdev=33.31 00:17:33.955 lat (usec): min=200, max=468, avg=265.14, stdev=33.03 00:17:33.955 clat percentiles (usec): 00:17:33.955 | 1.00th=[ 188], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 227], 00:17:33.955 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 249], 00:17:33.955 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 302], 00:17:33.955 | 99.00th=[ 379], 99.50th=[ 433], 99.90th=[ 445], 99.95th=[ 445], 00:17:33.955 | 99.99th=[ 445] 00:17:33.955 bw ( KiB/s): min= 4087, max= 4087, per=31.34%, avg=4087.00, stdev= 0.00, samples=1 00:17:33.955 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:17:33.955 lat (usec) : 250=38.72%, 500=58.99% 00:17:33.955 lat (msec) : 50=2.29% 00:17:33.955 cpu : usr=1.16%, sys=1.07%, ctx=829, majf=0, minf=1 00:17:33.955 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:33.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.955 issued rwts: total=317,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.955 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:33.955 job1: (groupid=0, jobs=1): err= 0: pid=1901473: Mon Jul 15 09:50:50 2024 00:17:33.955 read: IOPS=21, BW=85.9KiB/s (87.9kB/s)(88.0KiB/1025msec) 00:17:33.955 slat (nsec): min=15592, max=49619, avg=25267.05, stdev=10833.18 00:17:33.955 clat (usec): min=40709, max=42975, avg=41087.65, stdev=475.08 00:17:33.955 lat (usec): min=40726, max=43008, avg=41112.92, stdev=475.60 00:17:33.955 clat percentiles (usec): 00:17:33.955 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:33.955 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:33.955 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:17:33.955 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:33.955 | 99.99th=[42730] 00:17:33.955 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:17:33.955 slat (nsec): min=5888, max=49062, avg=16159.79, stdev=4671.58 00:17:33.955 clat (usec): min=180, max=460, avg=213.27, stdev=23.34 00:17:33.955 lat (usec): min=192, max=487, avg=229.43, stdev=23.13 00:17:33.955 clat percentiles (usec): 00:17:33.955 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 198], 00:17:33.955 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:17:33.956 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 241], 95.00th=[ 251], 00:17:33.956 | 99.00th=[ 289], 99.50th=[ 322], 99.90th=[ 461], 99.95th=[ 461], 00:17:33.956 | 99.99th=[ 461] 00:17:33.956 bw ( KiB/s): min= 4096, max= 4096, per=31.40%, avg=4096.00, stdev= 0.00, samples=1 00:17:33.956 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:33.956 lat (usec) : 250=90.82%, 500=5.06% 00:17:33.956 lat (msec) : 50=4.12% 00:17:33.956 cpu 
: usr=0.29%, sys=0.98%, ctx=534, majf=0, minf=1 00:17:33.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:33.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.956 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:33.956 job2: (groupid=0, jobs=1): err= 0: pid=1901474: Mon Jul 15 09:50:50 2024 00:17:33.956 read: IOPS=680, BW=2721KiB/s (2787kB/s)(2724KiB/1001msec) 00:17:33.956 slat (nsec): min=5406, max=68242, avg=14619.42, stdev=8462.64 00:17:33.956 clat (usec): min=268, max=41089, avg=1007.40, stdev=5121.62 00:17:33.956 lat (usec): min=280, max=41107, avg=1022.02, stdev=5123.11 00:17:33.956 clat percentiles (usec): 00:17:33.956 | 1.00th=[ 285], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 322], 00:17:33.956 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 351], 00:17:33.956 | 70.00th=[ 355], 80.00th=[ 367], 90.00th=[ 437], 95.00th=[ 461], 00:17:33.956 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:33.956 | 99.99th=[41157] 00:17:33.956 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:17:33.956 slat (nsec): min=7372, max=79880, avg=20375.51, stdev=9418.87 00:17:33.956 clat (usec): min=150, max=447, avg=268.11, stdev=59.96 00:17:33.956 lat (usec): min=181, max=474, avg=288.49, stdev=63.35 00:17:33.956 clat percentiles (usec): 00:17:33.956 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 215], 00:17:33.956 | 30.00th=[ 233], 40.00th=[ 243], 50.00th=[ 255], 60.00th=[ 273], 00:17:33.956 | 70.00th=[ 297], 80.00th=[ 322], 90.00th=[ 359], 95.00th=[ 383], 00:17:33.956 | 99.00th=[ 416], 99.50th=[ 429], 99.90th=[ 433], 99.95th=[ 449], 00:17:33.956 | 99.99th=[ 449] 00:17:33.956 bw ( KiB/s): min= 4087, max= 4087, per=31.34%, avg=4087.00, stdev= 0.00, samples=1 00:17:33.956 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:17:33.956 lat (usec) : 250=27.39%, 500=71.61%, 750=0.35% 00:17:33.956 lat (msec) : 50=0.65% 00:17:33.956 cpu : usr=1.80%, sys=3.70%, ctx=1705, majf=0, minf=2 00:17:33.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:33.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.956 issued rwts: total=681,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:33.956 job3: (groupid=0, jobs=1): err= 0: pid=1901475: Mon Jul 15 09:50:50 2024 00:17:33.956 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:17:33.956 slat (nsec): min=7122, max=48476, avg=15980.79, stdev=5913.54 00:17:33.956 clat (usec): min=308, max=41119, avg=630.33, stdev=3100.71 00:17:33.956 lat (usec): min=316, max=41135, avg=646.31, stdev=3101.50 00:17:33.956 clat percentiles (usec): 00:17:33.956 | 1.00th=[ 322], 5.00th=[ 334], 10.00th=[ 347], 20.00th=[ 359], 00:17:33.956 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 379], 60.00th=[ 383], 00:17:33.956 | 70.00th=[ 396], 80.00th=[ 412], 90.00th=[ 465], 95.00th=[ 498], 00:17:33.956 | 99.00th=[ 742], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:17:33.956 | 99.99th=[41157] 00:17:33.956 write: IOPS=1315, BW=5263KiB/s (5389kB/s)(5268KiB/1001msec); 0 zone resets 00:17:33.956 slat (nsec): min=7235, max=58167, avg=17634.58, 
stdev=7744.75 00:17:33.956 clat (usec): min=178, max=3361, avg=230.35, stdev=93.65 00:17:33.956 lat (usec): min=187, max=3382, avg=247.98, stdev=95.55 00:17:33.956 clat percentiles (usec): 00:17:33.956 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:17:33.956 | 30.00th=[ 204], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 233], 00:17:33.956 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 289], 00:17:33.956 | 99.00th=[ 375], 99.50th=[ 396], 99.90th=[ 515], 99.95th=[ 3359], 00:17:33.956 | 99.99th=[ 3359] 00:17:33.956 bw ( KiB/s): min= 8192, max= 8192, per=62.81%, avg=8192.00, stdev= 0.00, samples=1 00:17:33.956 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:17:33.956 lat (usec) : 250=45.45%, 500=52.58%, 750=1.54%, 1000=0.04% 00:17:33.956 lat (msec) : 2=0.09%, 4=0.04%, 50=0.26% 00:17:33.956 cpu : usr=2.90%, sys=5.50%, ctx=2341, majf=0, minf=1 00:17:33.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:33.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.956 issued rwts: total=1024,1317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:33.956 00:17:33.956 Run status group 0 (all jobs): 00:17:33.956 READ: bw=7922KiB/s (8113kB/s), 85.9KiB/s-4092KiB/s (87.9kB/s-4190kB/s), io=8176KiB (8372kB), run=1001-1032msec 00:17:33.956 WRITE: bw=12.7MiB/s (13.4MB/s), 1984KiB/s-5263KiB/s (2032kB/s-5389kB/s), io=13.1MiB (13.8MB), run=1001-1032msec 00:17:33.956 00:17:33.956 Disk stats (read/write): 00:17:33.956 nvme0n1: ios=362/512, merge=0/0, ticks=723/126, in_queue=849, util=87.17% 00:17:33.956 nvme0n2: ios=45/512, merge=0/0, ticks=724/108, in_queue=832, util=87.30% 00:17:33.956 nvme0n3: ios=542/719, merge=0/0, ticks=698/189, in_queue=887, util=91.32% 00:17:33.956 nvme0n4: ios=861/1024, merge=0/0, ticks=564/211, in_queue=775, util=89.57% 00:17:33.956 09:50:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:33.956 [global] 00:17:33.956 thread=1 00:17:33.956 invalidate=1 00:17:33.956 rw=write 00:17:33.956 time_based=1 00:17:33.956 runtime=1 00:17:33.956 ioengine=libaio 00:17:33.956 direct=1 00:17:33.956 bs=4096 00:17:33.956 iodepth=128 00:17:33.956 norandommap=0 00:17:33.956 numjobs=1 00:17:33.956 00:17:33.956 verify_dump=1 00:17:33.956 verify_backlog=512 00:17:33.956 verify_state_save=0 00:17:33.956 do_verify=1 00:17:33.956 verify=crc32c-intel 00:17:33.956 [job0] 00:17:33.956 filename=/dev/nvme0n1 00:17:33.956 [job1] 00:17:33.956 filename=/dev/nvme0n2 00:17:33.956 [job2] 00:17:33.956 filename=/dev/nvme0n3 00:17:33.956 [job3] 00:17:33.956 filename=/dev/nvme0n4 00:17:33.956 Could not set queue depth (nvme0n1) 00:17:33.956 Could not set queue depth (nvme0n2) 00:17:33.956 Could not set queue depth (nvme0n3) 00:17:33.956 Could not set queue depth (nvme0n4) 00:17:33.956 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:33.956 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:33.956 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:33.956 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:33.956 
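Note: the fio-wrapper call above simply expands its flags (-i 4096, -d 128, -t write, -r 1, -v) into the ordinary libaio job file dumped in the trace. A minimal standalone equivalent, reconstructed from that [global]/[jobN] dump, is sketched below; the fio.job filename and the bare fio invocation are illustrative stand-ins, not part of the harness:

    cat > fio.job <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=128
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4
    EOF
    fio fio.job

The "Could not set queue depth (nvme0nX)" notices above are fio warning that it could not tune the devices' queue settings; as the per-job results show, the runs proceed regardless.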
fio-3.35 00:17:33.956 Starting 4 threads 00:17:35.368 00:17:35.368 job0: (groupid=0, jobs=1): err= 0: pid=1901699: Mon Jul 15 09:50:51 2024 00:17:35.368 read: IOPS=3633, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1006msec) 00:17:35.368 slat (usec): min=2, max=43132, avg=113.84, stdev=1094.28 00:17:35.368 clat (usec): min=3389, max=95859, avg=17215.76, stdev=14698.07 00:17:35.368 lat (usec): min=3400, max=95868, avg=17329.60, stdev=14796.21 00:17:35.368 clat percentiles (usec): 00:17:35.368 | 1.00th=[ 4686], 5.00th=[ 7111], 10.00th=[ 7439], 20.00th=[ 8717], 00:17:35.368 | 30.00th=[ 9896], 40.00th=[10945], 50.00th=[11863], 60.00th=[13173], 00:17:35.368 | 70.00th=[15664], 80.00th=[19530], 90.00th=[37487], 95.00th=[44303], 00:17:35.368 | 99.00th=[79168], 99.50th=[80217], 99.90th=[82314], 99.95th=[87557], 00:17:35.368 | 99.99th=[95945] 00:17:35.368 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:17:35.368 slat (usec): min=3, max=8355, avg=108.42, stdev=575.11 00:17:35.368 clat (usec): min=189, max=51779, avg=15570.42, stdev=11085.11 00:17:35.368 lat (usec): min=207, max=51821, avg=15678.84, stdev=11167.56 00:17:35.368 clat percentiles (usec): 00:17:35.368 | 1.00th=[ 832], 5.00th=[ 3818], 10.00th=[ 5211], 20.00th=[ 7963], 00:17:35.368 | 30.00th=[ 8979], 40.00th=[10421], 50.00th=[11076], 60.00th=[12911], 00:17:35.368 | 70.00th=[15795], 80.00th=[26084], 90.00th=[33162], 95.00th=[38536], 00:17:35.368 | 99.00th=[47973], 99.50th=[48497], 99.90th=[50594], 99.95th=[50594], 00:17:35.368 | 99.99th=[51643] 00:17:35.368 bw ( KiB/s): min=15936, max=16384, per=27.36%, avg=16160.00, stdev=316.78, samples=2 00:17:35.368 iops : min= 3984, max= 4096, avg=4040.00, stdev=79.20, samples=2 00:17:35.368 lat (usec) : 250=0.12%, 500=0.04%, 1000=1.59% 00:17:35.368 lat (msec) : 2=0.35%, 4=1.01%, 10=31.66%, 20=41.68%, 50=21.58% 00:17:35.368 lat (msec) : 100=1.97% 00:17:35.368 cpu : usr=4.08%, sys=5.87%, ctx=432, majf=0, minf=1 00:17:35.368 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:35.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:35.368 issued rwts: total=3655,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:35.368 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:35.368 job1: (groupid=0, jobs=1): err= 0: pid=1901700: Mon Jul 15 09:50:51 2024 00:17:35.368 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:17:35.368 slat (usec): min=3, max=28565, avg=186.56, stdev=1323.45 00:17:35.368 clat (msec): min=7, max=116, avg=23.99, stdev=18.74 00:17:35.368 lat (msec): min=8, max=116, avg=24.17, stdev=18.89 00:17:35.368 clat percentiles (msec): 00:17:35.368 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:17:35.368 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 17], 60.00th=[ 22], 00:17:35.368 | 70.00th=[ 26], 80.00th=[ 34], 90.00th=[ 48], 95.00th=[ 67], 00:17:35.368 | 99.00th=[ 93], 99.50th=[ 100], 99.90th=[ 116], 99.95th=[ 116], 00:17:35.368 | 99.99th=[ 116] 00:17:35.368 write: IOPS=2758, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1004msec); 0 zone resets 00:17:35.368 slat (usec): min=4, max=17612, avg=177.42, stdev=981.89 00:17:35.368 clat (usec): min=3133, max=69496, avg=23643.25, stdev=14826.36 00:17:35.368 lat (usec): min=3749, max=69508, avg=23820.68, stdev=14921.17 00:17:35.368 clat percentiles (usec): 00:17:35.368 | 1.00th=[ 5735], 5.00th=[10028], 10.00th=[10421], 20.00th=[10814], 00:17:35.368 | 30.00th=[14615], 40.00th=[17695], 
50.00th=[18482], 60.00th=[20317], 00:17:35.368 | 70.00th=[25297], 80.00th=[32900], 90.00th=[49546], 95.00th=[57934], 00:17:35.368 | 99.00th=[66323], 99.50th=[67634], 99.90th=[69731], 99.95th=[69731], 00:17:35.368 | 99.99th=[69731] 00:17:35.368 bw ( KiB/s): min= 9688, max=11456, per=17.90%, avg=10572.00, stdev=1250.16, samples=2 00:17:35.368 iops : min= 2422, max= 2864, avg=2643.00, stdev=312.54, samples=2 00:17:35.369 lat (msec) : 4=0.17%, 10=5.29%, 20=53.36%, 50=31.91%, 100=9.16% 00:17:35.369 lat (msec) : 250=0.11% 00:17:35.369 cpu : usr=4.79%, sys=5.08%, ctx=287, majf=0, minf=1 00:17:35.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:17:35.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:35.369 issued rwts: total=2560,2770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:35.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:35.369 job2: (groupid=0, jobs=1): err= 0: pid=1901705: Mon Jul 15 09:50:51 2024 00:17:35.369 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:17:35.369 slat (usec): min=2, max=13257, avg=109.15, stdev=758.12 00:17:35.369 clat (usec): min=6228, max=46215, avg=14857.28, stdev=5217.83 00:17:35.369 lat (usec): min=6232, max=46221, avg=14966.43, stdev=5258.53 00:17:35.369 clat percentiles (usec): 00:17:35.369 | 1.00th=[ 8455], 5.00th=[10683], 10.00th=[11207], 20.00th=[11600], 00:17:35.369 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13435], 60.00th=[14353], 00:17:35.369 | 70.00th=[14746], 80.00th=[15795], 90.00th=[20579], 95.00th=[24249], 00:17:35.369 | 99.00th=[33817], 99.50th=[37487], 99.90th=[45876], 99.95th=[46400], 00:17:35.369 | 99.99th=[46400] 00:17:35.369 write: IOPS=4822, BW=18.8MiB/s (19.8MB/s)(18.9MiB/1004msec); 0 zone resets 00:17:35.369 slat (usec): min=3, max=10654, avg=79.16, stdev=459.87 00:17:35.369 clat (usec): min=579, max=58433, avg=12082.27, stdev=5870.29 00:17:35.369 lat (usec): min=623, max=58448, avg=12161.43, stdev=5886.81 00:17:35.369 clat percentiles (usec): 00:17:35.369 | 1.00th=[ 1401], 5.00th=[ 4113], 10.00th=[ 6325], 20.00th=[ 9110], 00:17:35.369 | 30.00th=[10552], 40.00th=[11600], 50.00th=[12125], 60.00th=[12649], 00:17:35.369 | 70.00th=[13042], 80.00th=[13960], 90.00th=[15533], 95.00th=[17171], 00:17:35.369 | 99.00th=[44827], 99.50th=[51643], 99.90th=[58459], 99.95th=[58459], 00:17:35.369 | 99.99th=[58459] 00:17:35.369 bw ( KiB/s): min=17232, max=20480, per=31.93%, avg=18856.00, stdev=2296.68, samples=2 00:17:35.369 iops : min= 4308, max= 5120, avg=4714.00, stdev=574.17, samples=2 00:17:35.369 lat (usec) : 750=0.03%, 1000=0.04% 00:17:35.369 lat (msec) : 2=0.98%, 4=1.32%, 10=12.19%, 20=78.22%, 50=6.80% 00:17:35.369 lat (msec) : 100=0.40% 00:17:35.369 cpu : usr=4.49%, sys=6.78%, ctx=490, majf=0, minf=1 00:17:35.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:35.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:35.369 issued rwts: total=4608,4842,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:35.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:35.369 job3: (groupid=0, jobs=1): err= 0: pid=1901710: Mon Jul 15 09:50:51 2024 00:17:35.369 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:17:35.369 slat (usec): min=2, max=17130, avg=176.58, stdev=1092.34 00:17:35.369 clat (usec): min=7589, max=83732, 
avg=23514.72, stdev=11666.38 00:17:35.369 lat (usec): min=7599, max=83772, avg=23691.29, stdev=11744.54 00:17:35.369 clat percentiles (usec): 00:17:35.369 | 1.00th=[ 8586], 5.00th=[10945], 10.00th=[13698], 20.00th=[16581], 00:17:35.369 | 30.00th=[18482], 40.00th=[19530], 50.00th=[19792], 60.00th=[21365], 00:17:35.369 | 70.00th=[24511], 80.00th=[27132], 90.00th=[38011], 95.00th=[51643], 00:17:35.369 | 99.00th=[80217], 99.50th=[80217], 99.90th=[80217], 99.95th=[80217], 00:17:35.369 | 99.99th=[83362] 00:17:35.369 write: IOPS=3127, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1006msec); 0 zone resets 00:17:35.369 slat (usec): min=3, max=16483, avg=131.13, stdev=836.05 00:17:35.369 clat (usec): min=1259, max=58561, avg=17350.65, stdev=7956.24 00:17:35.369 lat (usec): min=1272, max=58605, avg=17481.77, stdev=8041.75 00:17:35.369 clat percentiles (usec): 00:17:35.369 | 1.00th=[ 6587], 5.00th=[ 7767], 10.00th=[10421], 20.00th=[12649], 00:17:35.369 | 30.00th=[13960], 40.00th=[14484], 50.00th=[15533], 60.00th=[16057], 00:17:35.369 | 70.00th=[17171], 80.00th=[20841], 90.00th=[26346], 95.00th=[32375], 00:17:35.369 | 99.00th=[49546], 99.50th=[49546], 99.90th=[50070], 99.95th=[57410], 00:17:35.369 | 99.99th=[58459] 00:17:35.369 bw ( KiB/s): min= 8904, max=15672, per=20.81%, avg=12288.00, stdev=4785.70, samples=2 00:17:35.369 iops : min= 2226, max= 3918, avg=3072.00, stdev=1196.42, samples=2 00:17:35.369 lat (msec) : 2=0.18%, 10=4.94%, 20=59.20%, 50=33.10%, 100=2.59% 00:17:35.369 cpu : usr=3.98%, sys=5.27%, ctx=297, majf=0, minf=1 00:17:35.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:17:35.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:35.369 issued rwts: total=3072,3146,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:35.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:35.369 00:17:35.369 Run status group 0 (all jobs): 00:17:35.369 READ: bw=54.0MiB/s (56.6MB/s), 9.96MiB/s-17.9MiB/s (10.4MB/s-18.8MB/s), io=54.3MiB (56.9MB), run=1004-1006msec 00:17:35.369 WRITE: bw=57.7MiB/s (60.5MB/s), 10.8MiB/s-18.8MiB/s (11.3MB/s-19.8MB/s), io=58.0MiB (60.8MB), run=1004-1006msec 00:17:35.369 00:17:35.369 Disk stats (read/write): 00:17:35.369 nvme0n1: ios=3496/3584, merge=0/0, ticks=36383/26094, in_queue=62477, util=99.60% 00:17:35.369 nvme0n2: ios=1556/1966, merge=0/0, ticks=16320/18542, in_queue=34862, util=86.78% 00:17:35.369 nvme0n3: ios=3794/4096, merge=0/0, ticks=33634/27675, in_queue=61309, util=88.47% 00:17:35.369 nvme0n4: ios=2532/2560, merge=0/0, ticks=23679/18558, in_queue=42237, util=94.82% 00:17:35.369 09:50:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:35.369 [global] 00:17:35.369 thread=1 00:17:35.369 invalidate=1 00:17:35.369 rw=randwrite 00:17:35.369 time_based=1 00:17:35.369 runtime=1 00:17:35.369 ioengine=libaio 00:17:35.369 direct=1 00:17:35.369 bs=4096 00:17:35.369 iodepth=128 00:17:35.369 norandommap=0 00:17:35.369 numjobs=1 00:17:35.369 00:17:35.369 verify_dump=1 00:17:35.369 verify_backlog=512 00:17:35.369 verify_state_save=0 00:17:35.369 do_verify=1 00:17:35.369 verify=crc32c-intel 00:17:35.369 [job0] 00:17:35.369 filename=/dev/nvme0n1 00:17:35.369 [job1] 00:17:35.369 filename=/dev/nvme0n2 00:17:35.369 [job2] 00:17:35.369 filename=/dev/nvme0n3 00:17:35.369 [job3] 00:17:35.369 filename=/dev/nvme0n4 00:17:35.369 Could 
not set queue depth (nvme0n1) 00:17:35.369 Could not set queue depth (nvme0n2) 00:17:35.369 Could not set queue depth (nvme0n3) 00:17:35.369 Could not set queue depth (nvme0n4) 00:17:35.627 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:35.627 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:35.627 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:35.627 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:35.627 fio-3.35 00:17:35.627 Starting 4 threads 00:17:37.001 00:17:37.001 job0: (groupid=0, jobs=1): err= 0: pid=1902039: Mon Jul 15 09:50:53 2024 00:17:37.001 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:17:37.001 slat (usec): min=2, max=10586, avg=96.70, stdev=648.20 00:17:37.001 clat (usec): min=4208, max=27959, avg=12874.68, stdev=3407.93 00:17:37.001 lat (usec): min=4213, max=27994, avg=12971.38, stdev=3441.36 00:17:37.001 clat percentiles (usec): 00:17:37.001 | 1.00th=[ 5800], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[10683], 00:17:37.001 | 30.00th=[11207], 40.00th=[11731], 50.00th=[11994], 60.00th=[12518], 00:17:37.001 | 70.00th=[13173], 80.00th=[15401], 90.00th=[18482], 95.00th=[20317], 00:17:37.001 | 99.00th=[22152], 99.50th=[22938], 99.90th=[24249], 99.95th=[24249], 00:17:37.001 | 99.99th=[27919] 00:17:37.001 write: IOPS=5404, BW=21.1MiB/s (22.1MB/s)(21.2MiB/1004msec); 0 zone resets 00:17:37.001 slat (usec): min=3, max=9793, avg=80.06, stdev=441.16 00:17:37.001 clat (usec): min=871, max=23432, avg=11287.81, stdev=2246.97 00:17:37.001 lat (usec): min=884, max=23447, avg=11367.87, stdev=2268.34 00:17:37.001 clat percentiles (usec): 00:17:37.001 | 1.00th=[ 3785], 5.00th=[ 6783], 10.00th=[ 8160], 20.00th=[10421], 00:17:37.001 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:17:37.001 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13042], 95.00th=[13435], 00:17:37.001 | 99.00th=[17171], 99.50th=[18482], 99.90th=[21627], 99.95th=[22938], 00:17:37.001 | 99.99th=[23462] 00:17:37.001 bw ( KiB/s): min=21120, max=21272, per=28.23%, avg=21196.00, stdev=107.48, samples=2 00:17:37.001 iops : min= 5280, max= 5318, avg=5299.00, stdev=26.87, samples=2 00:17:37.001 lat (usec) : 1000=0.04% 00:17:37.001 lat (msec) : 2=0.02%, 4=0.61%, 10=14.27%, 20=81.79%, 50=3.27% 00:17:37.001 cpu : usr=5.78%, sys=9.97%, ctx=522, majf=0, minf=1 00:17:37.001 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:37.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:37.001 issued rwts: total=5120,5426,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.001 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:37.001 job1: (groupid=0, jobs=1): err= 0: pid=1902055: Mon Jul 15 09:50:53 2024 00:17:37.001 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:17:37.001 slat (usec): min=2, max=7960, avg=105.86, stdev=573.34 00:17:37.001 clat (usec): min=6299, max=74875, avg=14050.33, stdev=4839.62 00:17:37.001 lat (usec): min=6308, max=74880, avg=14156.19, stdev=4861.32 00:17:37.001 clat percentiles (usec): 00:17:37.001 | 1.00th=[ 8586], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11338], 00:17:37.001 | 30.00th=[11863], 40.00th=[12387], 50.00th=[13173], 60.00th=[13960], 
00:17:37.001 | 70.00th=[14484], 80.00th=[15795], 90.00th=[19006], 95.00th=[20579], 00:17:37.001 | 99.00th=[26084], 99.50th=[30278], 99.90th=[70779], 99.95th=[70779], 00:17:37.001 | 99.99th=[74974] 00:17:37.001 write: IOPS=4800, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1004msec); 0 zone resets 00:17:37.001 slat (usec): min=3, max=9067, avg=95.34, stdev=497.76 00:17:37.001 clat (usec): min=602, max=31910, avg=12930.86, stdev=3453.66 00:17:37.001 lat (usec): min=6371, max=31917, avg=13026.20, stdev=3470.66 00:17:37.001 clat percentiles (usec): 00:17:37.001 | 1.00th=[ 7701], 5.00th=[ 8979], 10.00th=[10159], 20.00th=[10814], 00:17:37.001 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12256], 60.00th=[12911], 00:17:37.001 | 70.00th=[13304], 80.00th=[14091], 90.00th=[15533], 95.00th=[20579], 00:17:37.001 | 99.00th=[28705], 99.50th=[30278], 99.90th=[31851], 99.95th=[31851], 00:17:37.001 | 99.99th=[31851] 00:17:37.001 bw ( KiB/s): min=17056, max=20480, per=24.99%, avg=18768.00, stdev=2421.13, samples=2 00:17:37.001 iops : min= 4264, max= 5120, avg=4692.00, stdev=605.28, samples=2 00:17:37.001 lat (usec) : 750=0.01% 00:17:37.001 lat (msec) : 10=6.93%, 20=86.53%, 50=6.35%, 100=0.18% 00:17:37.001 cpu : usr=6.38%, sys=10.77%, ctx=400, majf=0, minf=1 00:17:37.001 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:37.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:37.001 issued rwts: total=4608,4820,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.001 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:37.001 job2: (groupid=0, jobs=1): err= 0: pid=1902056: Mon Jul 15 09:50:53 2024 00:17:37.001 read: IOPS=3571, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:17:37.001 slat (usec): min=2, max=24821, avg=121.50, stdev=802.25 00:17:37.001 clat (usec): min=541, max=49013, avg=15946.92, stdev=4937.19 00:17:37.001 lat (usec): min=4792, max=49046, avg=16068.43, stdev=4973.45 00:17:37.001 clat percentiles (usec): 00:17:37.001 | 1.00th=[10290], 5.00th=[11207], 10.00th=[12125], 20.00th=[13566], 00:17:37.001 | 30.00th=[14091], 40.00th=[14484], 50.00th=[14746], 60.00th=[15270], 00:17:37.001 | 70.00th=[16319], 80.00th=[16909], 90.00th=[19530], 95.00th=[24249], 00:17:37.001 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:17:37.001 | 99.99th=[49021] 00:17:37.001 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:17:37.001 slat (usec): min=3, max=22124, avg=131.33, stdev=966.21 00:17:37.001 clat (usec): min=4814, max=69275, avg=16453.99, stdev=9230.01 00:17:37.001 lat (usec): min=4818, max=69291, avg=16585.32, stdev=9316.92 00:17:37.001 clat percentiles (usec): 00:17:37.002 | 1.00th=[ 5669], 5.00th=[ 9503], 10.00th=[10814], 20.00th=[12911], 00:17:37.002 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13829], 60.00th=[14484], 00:17:37.002 | 70.00th=[16188], 80.00th=[17171], 90.00th=[20055], 95.00th=[43779], 00:17:37.002 | 99.00th=[54264], 99.50th=[66323], 99.90th=[66323], 99.95th=[66323], 00:17:37.002 | 99.99th=[69731] 00:17:37.002 bw ( KiB/s): min=12288, max=19496, per=21.16%, avg=15892.00, stdev=5096.83, samples=2 00:17:37.002 iops : min= 3072, max= 4874, avg=3973.00, stdev=1274.21, samples=2 00:17:37.002 lat (usec) : 750=0.01% 00:17:37.002 lat (msec) : 10=3.37%, 20=87.63%, 50=7.74%, 100=1.25% 00:17:37.002 cpu : usr=3.59%, sys=5.48%, ctx=259, majf=0, minf=1 00:17:37.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, 
>=64=99.2% 00:17:37.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:37.002 issued rwts: total=3589,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.002 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:37.002 job3: (groupid=0, jobs=1): err= 0: pid=1902057: Mon Jul 15 09:50:53 2024 00:17:37.002 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:17:37.002 slat (usec): min=3, max=15049, avg=123.72, stdev=783.68 00:17:37.002 clat (usec): min=5266, max=51916, avg=15441.27, stdev=3793.34 00:17:37.002 lat (usec): min=5288, max=51927, avg=15565.00, stdev=3882.56 00:17:37.002 clat percentiles (usec): 00:17:37.002 | 1.00th=[ 9896], 5.00th=[11207], 10.00th=[12518], 20.00th=[12911], 00:17:37.002 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14877], 60.00th=[15533], 00:17:37.002 | 70.00th=[15926], 80.00th=[17171], 90.00th=[19792], 95.00th=[21890], 00:17:37.002 | 99.00th=[24773], 99.50th=[30540], 99.90th=[52167], 99.95th=[52167], 00:17:37.002 | 99.99th=[52167] 00:17:37.002 write: IOPS=4515, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1006msec); 0 zone resets 00:17:37.002 slat (usec): min=3, max=13340, avg=96.28, stdev=608.48 00:17:37.002 clat (usec): min=829, max=51897, avg=14162.61, stdev=5925.84 00:17:37.002 lat (usec): min=877, max=51920, avg=14258.90, stdev=5939.47 00:17:37.002 clat percentiles (usec): 00:17:37.002 | 1.00th=[ 4424], 5.00th=[ 7701], 10.00th=[10028], 20.00th=[11994], 00:17:37.002 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13304], 60.00th=[13698], 00:17:37.002 | 70.00th=[14091], 80.00th=[15139], 90.00th=[16450], 95.00th=[19792], 00:17:37.002 | 99.00th=[43779], 99.50th=[43779], 99.90th=[50594], 99.95th=[50594], 00:17:37.002 | 99.99th=[51643] 00:17:37.002 bw ( KiB/s): min=16384, max=18944, per=23.52%, avg=17664.00, stdev=1810.19, samples=2 00:17:37.002 iops : min= 4096, max= 4736, avg=4416.00, stdev=452.55, samples=2 00:17:37.002 lat (usec) : 1000=0.06% 00:17:37.002 lat (msec) : 4=0.08%, 10=6.00%, 20=87.22%, 50=6.34%, 100=0.30% 00:17:37.002 cpu : usr=6.07%, sys=9.75%, ctx=368, majf=0, minf=1 00:17:37.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:37.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:37.002 issued rwts: total=4096,4543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.002 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:37.002 00:17:37.002 Run status group 0 (all jobs): 00:17:37.002 READ: bw=67.6MiB/s (70.9MB/s), 13.9MiB/s-19.9MiB/s (14.6MB/s-20.9MB/s), io=68.0MiB (71.3MB), run=1004-1006msec 00:17:37.002 WRITE: bw=73.3MiB/s (76.9MB/s), 15.9MiB/s-21.1MiB/s (16.7MB/s-22.1MB/s), io=73.8MiB (77.4MB), run=1004-1006msec 00:17:37.002 00:17:37.002 Disk stats (read/write): 00:17:37.002 nvme0n1: ios=4291/4608, merge=0/0, ticks=41773/38601, in_queue=80374, util=96.09% 00:17:37.002 nvme0n2: ios=4109/4247, merge=0/0, ticks=18708/14775, in_queue=33483, util=86.59% 00:17:37.002 nvme0n3: ios=3072/3157, merge=0/0, ticks=20544/22765, in_queue=43309, util=89.02% 00:17:37.002 nvme0n4: ios=3522/3584, merge=0/0, ticks=37430/35231, in_queue=72661, util=88.30% 00:17:37.002 09:50:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:37.002 09:50:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1902190 00:17:37.002 09:50:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:37.002 09:50:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:37.002 [global] 00:17:37.002 thread=1 00:17:37.002 invalidate=1 00:17:37.002 rw=read 00:17:37.002 time_based=1 00:17:37.002 runtime=10 00:17:37.002 ioengine=libaio 00:17:37.002 direct=1 00:17:37.002 bs=4096 00:17:37.002 iodepth=1 00:17:37.002 norandommap=1 00:17:37.002 numjobs=1 00:17:37.002 00:17:37.002 [job0] 00:17:37.002 filename=/dev/nvme0n1 00:17:37.002 [job1] 00:17:37.002 filename=/dev/nvme0n2 00:17:37.002 [job2] 00:17:37.002 filename=/dev/nvme0n3 00:17:37.002 [job3] 00:17:37.002 filename=/dev/nvme0n4 00:17:37.002 Could not set queue depth (nvme0n1) 00:17:37.002 Could not set queue depth (nvme0n2) 00:17:37.002 Could not set queue depth (nvme0n3) 00:17:37.002 Could not set queue depth (nvme0n4) 00:17:37.002 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:37.002 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:37.002 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:37.002 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:37.002 fio-3.35 00:17:37.002 Starting 4 threads 00:17:40.279 09:50:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:40.279 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=21204992, buflen=4096 00:17:40.279 fio: pid=1902285, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:40.279 09:50:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:40.279 09:50:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:40.279 09:50:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:40.279 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=6258688, buflen=4096 00:17:40.279 fio: pid=1902284, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:40.537 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=38617088, buflen=4096 00:17:40.537 fio: pid=1902280, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:40.537 09:50:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:40.537 09:50:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:40.794 09:50:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:40.794 09:50:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:40.794 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=1183744, buflen=4096 00:17:40.794 fio: pid=1902281, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:41.052 00:17:41.052 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u 
error, error=Remote I/O error): pid=1902280: Mon Jul 15 09:50:57 2024 00:17:41.052 read: IOPS=2716, BW=10.6MiB/s (11.1MB/s)(36.8MiB/3471msec) 00:17:41.052 slat (usec): min=4, max=15895, avg=15.79, stdev=244.66 00:17:41.052 clat (usec): min=259, max=41242, avg=347.24, stdev=424.77 00:17:41.052 lat (usec): min=270, max=41248, avg=362.22, stdev=485.28 00:17:41.052 clat percentiles (usec): 00:17:41.052 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 310], 00:17:41.052 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 338], 00:17:41.052 | 70.00th=[ 347], 80.00th=[ 367], 90.00th=[ 400], 95.00th=[ 441], 00:17:41.052 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 660], 99.95th=[ 807], 00:17:41.052 | 99.99th=[41157] 00:17:41.052 bw ( KiB/s): min= 9208, max=12056, per=61.65%, avg=10840.00, stdev=1016.66, samples=6 00:17:41.052 iops : min= 2302, max= 3014, avg=2710.00, stdev=254.17, samples=6 00:17:41.052 lat (usec) : 500=97.88%, 750=2.05%, 1000=0.04% 00:17:41.052 lat (msec) : 4=0.01%, 50=0.01% 00:17:41.052 cpu : usr=1.73%, sys=4.76%, ctx=9435, majf=0, minf=1 00:17:41.052 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:41.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.052 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.052 issued rwts: total=9429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:41.052 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:41.052 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1902281: Mon Jul 15 09:50:57 2024 00:17:41.052 read: IOPS=77, BW=309KiB/s (317kB/s)(1156KiB/3736msec) 00:17:41.052 slat (usec): min=4, max=19925, avg=135.77, stdev=1346.25 00:17:41.052 clat (usec): min=282, max=46189, avg=12706.78, stdev=18730.80 00:17:41.052 lat (usec): min=294, max=62036, avg=12842.97, stdev=18964.77 00:17:41.052 clat percentiles (usec): 00:17:41.052 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 330], 00:17:41.052 | 30.00th=[ 343], 40.00th=[ 355], 50.00th=[ 371], 60.00th=[ 392], 00:17:41.052 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:41.052 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:17:41.052 | 99.99th=[46400] 00:17:41.052 bw ( KiB/s): min= 96, max= 1624, per=1.84%, avg=323.14, stdev=573.77, samples=7 00:17:41.052 iops : min= 24, max= 406, avg=80.71, stdev=143.47, samples=7 00:17:41.052 lat (usec) : 500=68.62%, 750=0.34% 00:17:41.052 lat (msec) : 2=0.34%, 50=30.34% 00:17:41.052 cpu : usr=0.08%, sys=0.11%, ctx=293, majf=0, minf=1 00:17:41.052 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:41.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.052 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.052 issued rwts: total=290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:41.052 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:41.052 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1902284: Mon Jul 15 09:50:57 2024 00:17:41.052 read: IOPS=477, BW=1907KiB/s (1953kB/s)(6112KiB/3205msec) 00:17:41.052 slat (usec): min=5, max=15644, avg=32.09, stdev=487.60 00:17:41.052 clat (usec): min=293, max=42450, avg=2047.16, stdev=8103.98 00:17:41.052 lat (usec): min=307, max=42469, avg=2079.27, stdev=8116.23 00:17:41.052 clat percentiles (usec): 00:17:41.052 | 1.00th=[ 318], 5.00th=[ 326], 
10.00th=[ 330], 20.00th=[ 338], 00:17:41.052 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 392], 00:17:41.052 | 70.00th=[ 453], 80.00th=[ 482], 90.00th=[ 529], 95.00th=[ 570], 00:17:41.052 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:41.052 | 99.99th=[42206] 00:17:41.052 bw ( KiB/s): min= 88, max= 4760, per=8.31%, avg=1461.33, stdev=2145.46, samples=6 00:17:41.052 iops : min= 22, max= 1190, avg=365.33, stdev=536.37, samples=6 00:17:41.052 lat (usec) : 500=86.59%, 750=9.29%, 1000=0.07% 00:17:41.052 lat (msec) : 50=3.99% 00:17:41.052 cpu : usr=0.41%, sys=0.84%, ctx=1532, majf=0, minf=1 00:17:41.052 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:41.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.052 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.052 issued rwts: total=1529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:41.052 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:41.052 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1902285: Mon Jul 15 09:50:57 2024 00:17:41.052 read: IOPS=1786, BW=7143KiB/s (7315kB/s)(20.2MiB/2899msec) 00:17:41.052 slat (nsec): min=4309, max=64229, avg=15999.03, stdev=9790.53 00:17:41.052 clat (usec): min=270, max=41377, avg=538.20, stdev=2810.45 00:17:41.052 lat (usec): min=276, max=41410, avg=554.20, stdev=2810.66 00:17:41.052 clat percentiles (usec): 00:17:41.052 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 314], 00:17:41.052 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 347], 00:17:41.052 | 70.00th=[ 359], 80.00th=[ 371], 90.00th=[ 383], 95.00th=[ 400], 00:17:41.052 | 99.00th=[ 457], 99.50th=[ 865], 99.90th=[41157], 99.95th=[41157], 00:17:41.052 | 99.99th=[41157] 00:17:41.052 bw ( KiB/s): min= 96, max=12048, per=37.27%, avg=6553.60, stdev=5928.89, samples=5 00:17:41.052 iops : min= 24, max= 3012, avg=1638.40, stdev=1482.22, samples=5 00:17:41.052 lat (usec) : 500=99.36%, 750=0.12%, 1000=0.02% 00:17:41.052 lat (msec) : 50=0.48% 00:17:41.052 cpu : usr=0.97%, sys=3.83%, ctx=5178, majf=0, minf=1 00:17:41.052 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:41.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.052 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.053 issued rwts: total=5178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:41.053 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:41.053 00:17:41.053 Run status group 0 (all jobs): 00:17:41.053 READ: bw=17.2MiB/s (18.0MB/s), 309KiB/s-10.6MiB/s (317kB/s-11.1MB/s), io=64.1MiB (67.3MB), run=2899-3736msec 00:17:41.053 00:17:41.053 Disk stats (read/write): 00:17:41.053 nvme0n1: ios=9211/0, merge=0/0, ticks=4136/0, in_queue=4136, util=98.45% 00:17:41.053 nvme0n2: ios=286/0, merge=0/0, ticks=3542/0, in_queue=3542, util=95.63% 00:17:41.053 nvme0n3: ios=1341/0, merge=0/0, ticks=3713/0, in_queue=3713, util=98.19% 00:17:41.053 nvme0n4: ios=5069/0, merge=0/0, ticks=2709/0, in_queue=2709, util=96.74% 00:17:41.053 09:50:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:41.053 09:50:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:41.310 09:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # 
for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:41.310 09:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:41.568 09:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:41.568 09:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:41.826 09:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:41.826 09:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:42.082 09:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:42.082 09:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1902190 00:17:42.082 09:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:42.082 09:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:42.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.339 09:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:42.339 09:50:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:42.339 09:50:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:42.339 09:50:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:42.339 09:50:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:42.339 09:50:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:42.339 09:50:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:42.339 09:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:42.339 09:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:42.339 nvmf hotplug test: fio failed as expected 00:17:42.339 09:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.595 09:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:42.595 09:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:42.595 09:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:42.595 09:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:42.595 09:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:42.595 09:50:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:42.595 09:50:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:42.595 09:50:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:42.595 09:50:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:42.595 09:50:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:42.595 09:50:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:17:42.595 rmmod nvme_tcp 00:17:42.595 rmmod nvme_fabrics 00:17:42.595 rmmod nvme_keyring 00:17:42.595 09:50:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:42.595 09:50:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:42.595 09:50:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:42.595 09:50:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1900168 ']' 00:17:42.595 09:50:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1900168 00:17:42.596 09:50:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1900168 ']' 00:17:42.596 09:50:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1900168 00:17:42.596 09:50:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:42.596 09:50:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:42.596 09:50:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1900168 00:17:42.596 09:50:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:42.596 09:50:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:42.596 09:50:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1900168' 00:17:42.596 killing process with pid 1900168 00:17:42.596 09:50:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1900168 00:17:42.596 09:50:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1900168 00:17:42.853 09:50:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:42.853 09:50:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:42.853 09:50:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:42.853 09:50:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.853 09:50:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:42.853 09:50:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.853 09:50:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.853 09:50:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.754 09:51:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:45.012 00:17:45.012 real 0m23.300s 00:17:45.012 user 1m21.431s 00:17:45.012 sys 0m6.655s 00:17:45.012 09:51:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:45.012 09:51:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.012 ************************************ 00:17:45.012 END TEST nvmf_fio_target 00:17:45.012 ************************************ 00:17:45.012 09:51:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:45.012 09:51:01 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:45.012 09:51:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:45.012 09:51:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:45.012 09:51:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:45.012 ************************************ 00:17:45.012 
START TEST nvmf_bdevio 00:17:45.012 ************************************ 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:45.012 * Looking for test storage... 00:17:45.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.012 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:45.013 09:51:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:46.912 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:46.913 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:46.913 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:46.913 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:46.913 
Found net devices under 0000:0a:00.1: cvl_0_1 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:46.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:17:46.913 00:17:46.913 --- 10.0.0.2 ping statistics --- 00:17:46.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.913 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:46.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:17:46.913 00:17:46.913 --- 10.0.0.1 ping statistics --- 00:17:46.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.913 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1905010 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1905010 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1905010 ']' 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.913 09:51:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:47.171 [2024-07-15 09:51:03.723986] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:47.171 [2024-07-15 09:51:03.724062] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.171 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.171 [2024-07-15 09:51:03.762062] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:47.171 [2024-07-15 09:51:03.789040] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:47.171 [2024-07-15 09:51:03.875320] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:47.171 [2024-07-15 09:51:03.875368] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.171 [2024-07-15 09:51:03.875396] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.171 [2024-07-15 09:51:03.875408] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.171 [2024-07-15 09:51:03.875417] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.171 [2024-07-15 09:51:03.875554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:47.171 [2024-07-15 09:51:03.875616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:47.171 [2024-07-15 09:51:03.875646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:47.171 [2024-07-15 09:51:03.875648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:47.430 [2024-07-15 09:51:04.034801] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:47.430 Malloc0 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.430 09:51:04 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:47.430 [2024-07-15 09:51:04.088474] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:47.430 { 00:17:47.430 "params": { 00:17:47.430 "name": "Nvme$subsystem", 00:17:47.430 "trtype": "$TEST_TRANSPORT", 00:17:47.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:47.430 "adrfam": "ipv4", 00:17:47.430 "trsvcid": "$NVMF_PORT", 00:17:47.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:47.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:47.430 "hdgst": ${hdgst:-false}, 00:17:47.430 "ddgst": ${ddgst:-false} 00:17:47.430 }, 00:17:47.430 "method": "bdev_nvme_attach_controller" 00:17:47.430 } 00:17:47.430 EOF 00:17:47.430 )") 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:47.430 09:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:47.430 "params": { 00:17:47.430 "name": "Nvme1", 00:17:47.430 "trtype": "tcp", 00:17:47.430 "traddr": "10.0.0.2", 00:17:47.430 "adrfam": "ipv4", 00:17:47.430 "trsvcid": "4420", 00:17:47.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.430 "hdgst": false, 00:17:47.430 "ddgst": false 00:17:47.430 }, 00:17:47.430 "method": "bdev_nvme_attach_controller" 00:17:47.430 }' 00:17:47.430 [2024-07-15 09:51:04.133899] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:47.430 [2024-07-15 09:51:04.133976] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1905043 ] 00:17:47.430 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.430 [2024-07-15 09:51:04.167225] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
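The `--json /dev/fd/62` argument traced above comes from bash process substitution: gen_nvmf_target_json writes the attach-controller JSON to a pipe and the shell hands bdevio a /dev/fd path to read it from, so no temporary config file touches disk. A minimal sketch of the same pattern, with gen_config and some_app as placeholder names rather than harness functions:

gen_config() {
    # Stand-in for gen_nvmf_target_json: any command printing JSON to stdout works.
    printf '%s' '{"params": {"name": "Nvme1", "trtype": "tcp"}, "method": "bdev_nvme_attach_controller"}'
}
# <(gen_config) expands to a /dev/fd/NN path (62 in this run) that the
# application opens like an ordinary read-only file.
some_app --json <(gen_config)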
00:17:47.430 [2024-07-15 09:51:04.196913] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:47.689 [2024-07-15 09:51:04.288313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.689 [2024-07-15 09:51:04.288365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.689 [2024-07-15 09:51:04.288369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.689 I/O targets: 00:17:47.689 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:47.689 00:17:47.689 00:17:47.689 CUnit - A unit testing framework for C - Version 2.1-3 00:17:47.689 http://cunit.sourceforge.net/ 00:17:47.689 00:17:47.689 00:17:47.689 Suite: bdevio tests on: Nvme1n1 00:17:47.947 Test: blockdev write read block ...passed 00:17:47.947 Test: blockdev write zeroes read block ...passed 00:17:47.947 Test: blockdev write zeroes read no split ...passed 00:17:47.947 Test: blockdev write zeroes read split ...passed 00:17:47.947 Test: blockdev write zeroes read split partial ...passed 00:17:47.947 Test: blockdev reset ...[2024-07-15 09:51:04.676739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:47.947 [2024-07-15 09:51:04.676844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488940 (9): Bad file descriptor 00:17:47.947 [2024-07-15 09:51:04.730426] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:47.947 passed 00:17:47.947 Test: blockdev write read 8 blocks ...passed 00:17:48.204 Test: blockdev write read size > 128k ...passed 00:17:48.204 Test: blockdev write read invalid size ...passed 00:17:48.204 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:48.204 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:48.204 Test: blockdev write read max offset ...passed 00:17:48.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:48.205 Test: blockdev writev readv 8 blocks ...passed 00:17:48.205 Test: blockdev writev readv 30 x 1block ...passed 00:17:48.205 Test: blockdev writev readv block ...passed 00:17:48.205 Test: blockdev writev readv size > 128k ...passed 00:17:48.205 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:48.205 Test: blockdev comparev and writev ...[2024-07-15 09:51:04.948905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.205 [2024-07-15 09:51:04.948953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.205 [2024-07-15 09:51:04.948989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.205 [2024-07-15 09:51:04.949017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:48.205 [2024-07-15 09:51:04.949488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.205 [2024-07-15 09:51:04.949516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:48.205 [2024-07-15 09:51:04.949551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:17:48.205 [2024-07-15 09:51:04.949578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:48.205 [2024-07-15 09:51:04.949993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.205 [2024-07-15 09:51:04.950019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:48.205 [2024-07-15 09:51:04.950055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.205 [2024-07-15 09:51:04.950082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:48.205 [2024-07-15 09:51:04.950497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.205 [2024-07-15 09:51:04.950523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:48.205 [2024-07-15 09:51:04.950556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.205 [2024-07-15 09:51:04.950582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:48.462 passed 00:17:48.462 Test: blockdev nvme passthru rw ...passed 00:17:48.462 Test: blockdev nvme passthru vendor specific ...[2024-07-15 09:51:05.033211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:48.462 [2024-07-15 09:51:05.033240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:48.462 [2024-07-15 09:51:05.033444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:48.462 [2024-07-15 09:51:05.033470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:48.462 [2024-07-15 09:51:05.033673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:48.462 [2024-07-15 09:51:05.033698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:48.462 [2024-07-15 09:51:05.033901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:48.463 [2024-07-15 09:51:05.033926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:48.463 passed 00:17:48.463 Test: blockdev nvme admin passthru ...passed 00:17:48.463 Test: blockdev copy ...passed 00:17:48.463 00:17:48.463 Run Summary: Type Total Ran Passed Failed Inactive 00:17:48.463 suites 1 1 n/a 0 0 00:17:48.463 tests 23 23 23 0 0 00:17:48.463 asserts 152 152 152 0 n/a 00:17:48.463 00:17:48.463 Elapsed time = 1.238 seconds 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:48.721 rmmod nvme_tcp 00:17:48.721 rmmod nvme_fabrics 00:17:48.721 rmmod nvme_keyring 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1905010 ']' 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1905010 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 1905010 ']' 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1905010 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:48.721 09:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:48.722 09:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1905010 00:17:48.722 09:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:48.722 09:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:48.722 09:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1905010' 00:17:48.722 killing process with pid 1905010 00:17:48.722 09:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1905010 00:17:48.722 09:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1905010 00:17:48.981 09:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:48.981 09:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:48.981 09:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:48.981 09:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:48.981 09:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:48.981 09:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.981 09:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.981 09:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.890 09:51:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:50.890 00:17:50.890 real 0m6.052s 00:17:50.890 user 0m9.526s 00:17:50.890 sys 0m1.999s 00:17:50.890 09:51:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 
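nvmftestfini, whose trace ends here, unwinds the rig in reverse build order: unload the NVMe kernel modules (the rmmod lines above), kill the target by pid, then drop the target namespace and flush the initiator address. A rough reconstruction of that sequence using the names from this run; the function body is illustrative, not SPDK's actual code:

cleanup_rig() {
    local tgt_pid=$1
    modprobe -v -r nvme-tcp          # verbose removal also rmmods the nvme_fabrics/nvme_keyring deps seen above
    modprobe -v -r nvme-fabrics
    kill "$tgt_pid"
    while kill -0 "$tgt_pid" 2>/dev/null; do sleep 0.1; done   # wait for the target to exit
    ip netns delete cvl_0_0_ns_spdk  # returns cvl_0_0 to the root namespace
    ip -4 addr flush cvl_0_1         # clears the initiator-side 10.0.0.1/24
}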
00:17:50.890 09:51:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:50.890 ************************************ 00:17:50.890 END TEST nvmf_bdevio 00:17:50.890 ************************************ 00:17:50.890 09:51:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:50.890 09:51:07 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:50.890 09:51:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:50.890 09:51:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:50.890 09:51:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:51.204 ************************************ 00:17:51.204 START TEST nvmf_auth_target 00:17:51.204 ************************************ 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:51.204 * Looking for test storage... 00:17:51.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:51.204 09:51:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:51.204 09:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:53.105 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:53.105 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:53.105 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:53.105 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:53.105 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
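For readability, here is the nvmf_tcp_init wiring that the xtrace lines just below walk through (the same sequence ran earlier for the bdevio test), condensed to the bare commands with the addresses used in this run: one port of the dual-port NIC moves into a private namespace as the target, the other stays in the root namespace as the initiator.

ip netns add cvl_0_0_ns_spdk                        # target's private namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                  # initiator -> target sanity check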
00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:53.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:17:53.106 00:17:53.106 --- 10.0.0.2 ping statistics --- 00:17:53.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.106 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:53.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:53.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:17:53.106 00:17:53.106 --- 10.0.0.1 ping statistics --- 00:17:53.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.106 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1907609 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1907609 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@829 -- # '[' -z 1907609 ']' 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.106 09:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.364 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.364 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:53.364 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:53.364 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:53.364 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.364 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.364 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1907644 00:17:53.364 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:53.364 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:53.365 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:53.365 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:53.365 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:53.365 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:53.365 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:53.365 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:53.365 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:53.365 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=05a9df5196bccd00a4ef2607f1fe442ac6dee28b6cc87877 00:17:53.365 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:53.365 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.uaW 00:17:53.365 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 05a9df5196bccd00a4ef2607f1fe442ac6dee28b6cc87877 0 00:17:53.365 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 05a9df5196bccd00a4ef2607f1fe442ac6dee28b6cc87877 0 00:17:53.365 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:53.365 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:53.365 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=05a9df5196bccd00a4ef2607f1fe442ac6dee28b6cc87877 00:17:53.365 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:53.365 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:53.623 09:51:10 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.uaW 00:17:53.623 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.uaW 00:17:53.623 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.uaW 00:17:53.623 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:53.623 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:53.623 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:53.623 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:53.623 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:53.623 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:53.623 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:53.623 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5c0f5bd63b932c39c43202b7f36a2540876ee2ef7e81640072bc664a406cbf3b 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.UZ6 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5c0f5bd63b932c39c43202b7f36a2540876ee2ef7e81640072bc664a406cbf3b 3 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5c0f5bd63b932c39c43202b7f36a2540876ee2ef7e81640072bc664a406cbf3b 3 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5c0f5bd63b932c39c43202b7f36a2540876ee2ef7e81640072bc664a406cbf3b 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.UZ6 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.UZ6 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.UZ6 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=26914470b84b330d9448580ba40de7a6 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.qQi 00:17:53.624 09:51:10 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 26914470b84b330d9448580ba40de7a6 1 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 26914470b84b330d9448580ba40de7a6 1 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=26914470b84b330d9448580ba40de7a6 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.qQi 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.qQi 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.qQi 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d6a5d63c83b8290f6fdee6d56aeb17fa6033b7d9bae8b2de 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.iLl 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d6a5d63c83b8290f6fdee6d56aeb17fa6033b7d9bae8b2de 2 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d6a5d63c83b8290f6fdee6d56aeb17fa6033b7d9bae8b2de 2 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d6a5d63c83b8290f6fdee6d56aeb17fa6033b7d9bae8b2de 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.iLl 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.iLl 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.iLl 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:53.624 09:51:10 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=31e3d03a93fa805c90a6d0f6ee0969727816de94826b10d0 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.qhe 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 31e3d03a93fa805c90a6d0f6ee0969727816de94826b10d0 2 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 31e3d03a93fa805c90a6d0f6ee0969727816de94826b10d0 2 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=31e3d03a93fa805c90a6d0f6ee0969727816de94826b10d0 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.qhe 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.qhe 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.qhe 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3fb7b9e4109e7e2986d24d9493207e6f 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.den 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3fb7b9e4109e7e2986d24d9493207e6f 1 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3fb7b9e4109e7e2986d24d9493207e6f 1 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3fb7b9e4109e7e2986d24d9493207e6f 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:53.624 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:53.883 
09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.den 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.den 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.den 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=84adfb0d25915c97a7426b488d0e5a69c1a813bdd85fda9f86b8c52e70be72fd 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.H29 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 84adfb0d25915c97a7426b488d0e5a69c1a813bdd85fda9f86b8c52e70be72fd 3 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 84adfb0d25915c97a7426b488d0e5a69c1a813bdd85fda9f86b8c52e70be72fd 3 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=84adfb0d25915c97a7426b488d0e5a69c1a813bdd85fda9f86b8c52e70be72fd 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.H29 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.H29 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.H29 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1907609 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1907609 ']' 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
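Each gen_dhchap_key call above follows the same recipe: pull len/2 random bytes out of /dev/urandom as a hex string, wrap that string in a DHHC-1 secret, write it to a mode-0600 temp file, and hand the path back for the keys[]/ckeys[] arrays. A minimal bash sketch of that recipe (not the verbatim nvmf/common.sh helper; the little-endian CRC-32 trailer is an assumption inferred from the four extra bytes visible at the end of each secret):

    # digest ids as used above: 0=null, 1=sha256, 2=sha384, 3=sha512
    # (sketch takes the numeric id directly; the suite maps names via an array)
    gen_key_sketch() {
        local digest_id=$1 len=$2 hex file
        hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
        file=$(mktemp -t spdk.key-sketch.XXX)            # hypothetical template name
        # DHHC-1:<id>:<base64 of key bytes + assumed CRC-32 trailer>:
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); t=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+t).decode()))' "$hex" "$digest_id" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

Under those assumptions, gen_key_sketch 1 32 produces a file holding a DHHC-1:01:...: secret of the same shape as the sha256 keys generated above.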
00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.883 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.141 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.141 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:54.141 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1907644 /var/tmp/host.sock 00:17:54.141 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1907644 ']' 00:17:54.141 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:54.141 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.141 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:54.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:54.141 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.141 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.398 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.398 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:54.398 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:54.398 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.398 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.398 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.398 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:54.398 09:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uaW 00:17:54.398 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.398 09:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.398 09:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.398 09:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.uaW 00:17:54.398 09:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.uaW 00:17:54.655 09:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.UZ6 ]] 00:17:54.655 09:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UZ6 00:17:54.655 09:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.655 09:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.655 09:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.655 09:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UZ6 00:17:54.655 09:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UZ6 00:17:54.912 09:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:54.912 09:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.qQi 00:17:54.912 09:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.912 09:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.912 09:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.912 09:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.qQi 00:17:54.912 09:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.qQi 00:17:55.170 09:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.iLl ]] 00:17:55.170 09:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iLl 00:17:55.170 09:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.170 09:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.170 09:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.170 09:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iLl 00:17:55.170 09:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iLl 00:17:55.429 09:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:55.429 09:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.qhe 00:17:55.429 09:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.429 09:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.429 09:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.429 09:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.qhe 00:17:55.429 09:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.qhe 00:17:55.686 09:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.den ]] 00:17:55.687 09:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.den 00:17:55.687 09:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.687 09:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.687 09:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.687 09:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.den 00:17:55.687 09:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.den 00:17:55.944 09:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:55.944 09:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.H29 00:17:55.944 09:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.944 09:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.944 09:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.944 09:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.H29 00:17:55.944 09:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.H29 00:17:56.202 09:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:56.202 09:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:56.202 09:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.202 09:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.202 09:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:56.202 09:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:56.460 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:56.460 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.460 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.460 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:56.460 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:56.460 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.460 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.460 09:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.460 09:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.460 09:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.460 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.460 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.717 00:17:56.717 09:51:13 nvmf_tcp.nvmf_auth_target -- 
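Stripped of the xtrace noise, the setup for each round is a handful of RPCs, issued against two spdk_tgt processes: the target on the default /var/tmp/spdk.sock and the host application on /var/tmp/host.sock (adding -s /var/tmp/host.sock is all the hostrpc wrapper does). Condensed from the key0 round above, with rpc.py standing in for the full script path:

    # register the same key files on both sides under the names key0/ckey0
    rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.uaW
    rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UZ6
    rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.uaW
    rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UZ6

    # host side proposes exactly one digest and one DH group per round
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups null

    # target side: key0 authenticates the host, ckey0 makes it bidirectional
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # DH-HMAC-CHAP runs during the fabric CONNECT this attach triggers
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0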
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.717 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.717 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.974 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.974 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.974 09:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.974 09:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.974 09:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.974 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.974 { 00:17:56.974 "cntlid": 1, 00:17:56.974 "qid": 0, 00:17:56.974 "state": "enabled", 00:17:56.974 "thread": "nvmf_tgt_poll_group_000", 00:17:56.974 "listen_address": { 00:17:56.974 "trtype": "TCP", 00:17:56.974 "adrfam": "IPv4", 00:17:56.974 "traddr": "10.0.0.2", 00:17:56.974 "trsvcid": "4420" 00:17:56.974 }, 00:17:56.974 "peer_address": { 00:17:56.974 "trtype": "TCP", 00:17:56.974 "adrfam": "IPv4", 00:17:56.974 "traddr": "10.0.0.1", 00:17:56.974 "trsvcid": "59266" 00:17:56.974 }, 00:17:56.974 "auth": { 00:17:56.974 "state": "completed", 00:17:56.974 "digest": "sha256", 00:17:56.974 "dhgroup": "null" 00:17:56.974 } 00:17:56.974 } 00:17:56.974 ]' 00:17:56.974 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.974 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.974 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.974 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:56.974 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.232 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.232 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.232 09:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.490 09:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==: --dhchap-ctrl-secret DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=: 00:17:58.422 09:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.423 09:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:58.423 09:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.423 09:51:14 
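Verification happens from the target's point of view: nvmf_subsystem_get_qpairs exposes an auth object per queue pair, and the jq probes above pin its digest, dhgroup and state. The same secrets are then exercised a second time through the kernel initiator, where nvme-cli takes the DHHC-1 blobs inline instead of keyring names (commands as in the transcript, rpc.py path shortened):

    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
    # expected: completed   (.digest and .dhgroup are asserted the same way)

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret 'DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==:' \
        --dhchap-ctrl-secret 'DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0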
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.423 09:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.423 09:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.423 09:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:58.423 09:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:58.681 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:58.681 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.681 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.681 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:58.681 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:58.681 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.681 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.681 09:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.681 09:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.681 09:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.681 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.681 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.938 00:17:58.938 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.938 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.939 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.196 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.196 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.196 09:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.196 09:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.197 09:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.197 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.197 { 00:17:59.197 "cntlid": 3, 00:17:59.197 "qid": 0, 00:17:59.197 
"state": "enabled", 00:17:59.197 "thread": "nvmf_tgt_poll_group_000", 00:17:59.197 "listen_address": { 00:17:59.197 "trtype": "TCP", 00:17:59.197 "adrfam": "IPv4", 00:17:59.197 "traddr": "10.0.0.2", 00:17:59.197 "trsvcid": "4420" 00:17:59.197 }, 00:17:59.197 "peer_address": { 00:17:59.197 "trtype": "TCP", 00:17:59.197 "adrfam": "IPv4", 00:17:59.197 "traddr": "10.0.0.1", 00:17:59.197 "trsvcid": "59304" 00:17:59.197 }, 00:17:59.197 "auth": { 00:17:59.197 "state": "completed", 00:17:59.197 "digest": "sha256", 00:17:59.197 "dhgroup": "null" 00:17:59.197 } 00:17:59.197 } 00:17:59.197 ]' 00:17:59.197 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.197 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.197 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.197 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:59.197 09:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.454 09:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.454 09:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.454 09:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.710 09:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjY5MTQ0NzBiODRiMzMwZDk0NDg1ODBiYTQwZGU3YTbmrPWt: --dhchap-ctrl-secret DHHC-1:02:ZDZhNWQ2M2M4M2I4MjkwZjZmZGVlNmQ1NmFlYjE3ZmE2MDMzYjdkOWJhZThiMmRlZUxbuA==: 00:18:00.639 09:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.639 09:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.639 09:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.639 09:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.639 09:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.639 09:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.639 09:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:00.639 09:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:00.896 09:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:00.896 09:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.896 09:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.896 09:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:00.896 09:51:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:00.896 09:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.896 09:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.896 09:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.896 09:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.896 09:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.896 09:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.896 09:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.152 00:18:01.410 09:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.410 09:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.410 09:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.667 09:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.667 09:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.667 09:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.667 09:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.667 09:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.667 09:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.667 { 00:18:01.667 "cntlid": 5, 00:18:01.667 "qid": 0, 00:18:01.667 "state": "enabled", 00:18:01.667 "thread": "nvmf_tgt_poll_group_000", 00:18:01.667 "listen_address": { 00:18:01.667 "trtype": "TCP", 00:18:01.667 "adrfam": "IPv4", 00:18:01.667 "traddr": "10.0.0.2", 00:18:01.667 "trsvcid": "4420" 00:18:01.667 }, 00:18:01.667 "peer_address": { 00:18:01.667 "trtype": "TCP", 00:18:01.667 "adrfam": "IPv4", 00:18:01.667 "traddr": "10.0.0.1", 00:18:01.667 "trsvcid": "59328" 00:18:01.667 }, 00:18:01.667 "auth": { 00:18:01.667 "state": "completed", 00:18:01.667 "digest": "sha256", 00:18:01.667 "dhgroup": "null" 00:18:01.667 } 00:18:01.667 } 00:18:01.667 ]' 00:18:01.667 09:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.667 09:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.667 09:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.667 09:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:01.667 09:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:18:01.667 09:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.667 09:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.667 09:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.925 09:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzFlM2QwM2E5M2ZhODA1YzkwYTZkMGY2ZWUwOTY5NzI3ODE2ZGU5NDgyNmIxMGQw5g/R1Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZiN2I5ZTQxMDllN2UyOTg2ZDI0ZDk0OTMyMDdlNmbIITEQ: 00:18:02.856 09:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.856 09:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:02.856 09:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.856 09:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.856 09:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.856 09:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.856 09:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:02.856 09:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:03.421 09:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:03.421 09:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.421 09:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:03.421 09:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:03.421 09:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:03.421 09:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.421 09:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:03.421 09:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.421 09:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.421 09:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.421 09:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.421 09:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
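The DHHC-1 blobs handed to nvme connect are not new material: each one is the hex key generated at the top, base64-wrapped with the short binary trailer noted earlier. Spot-checking the key2 secret used just above recovers the generated hex key exactly (GNU head assumed for the negative byte count that strips the trailer):

    $ echo 'MzFlM2QwM2E5M2ZhODA1YzkwYTZkMGY2ZWUwOTY5NzI3ODE2ZGU5NDgyNmIxMGQw5g/R1Q==' | base64 -d | head -c -4; echo
    31e3d03a93fa805c90a6d0f6ee0969727816de94826b10d0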
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.678 00:18:03.678 09:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.678 09:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.678 09:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.936 09:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.936 09:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.936 09:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.936 09:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.936 09:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.936 09:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.936 { 00:18:03.936 "cntlid": 7, 00:18:03.936 "qid": 0, 00:18:03.936 "state": "enabled", 00:18:03.936 "thread": "nvmf_tgt_poll_group_000", 00:18:03.936 "listen_address": { 00:18:03.936 "trtype": "TCP", 00:18:03.936 "adrfam": "IPv4", 00:18:03.936 "traddr": "10.0.0.2", 00:18:03.936 "trsvcid": "4420" 00:18:03.936 }, 00:18:03.936 "peer_address": { 00:18:03.936 "trtype": "TCP", 00:18:03.936 "adrfam": "IPv4", 00:18:03.936 "traddr": "10.0.0.1", 00:18:03.936 "trsvcid": "59356" 00:18:03.936 }, 00:18:03.936 "auth": { 00:18:03.936 "state": "completed", 00:18:03.936 "digest": "sha256", 00:18:03.936 "dhgroup": "null" 00:18:03.936 } 00:18:03.936 } 00:18:03.936 ]' 00:18:03.936 09:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.936 09:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.936 09:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.936 09:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:03.936 09:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.936 09:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.936 09:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.936 09:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.193 09:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODRhZGZiMGQyNTkxNWM5N2E3NDI2YjQ4OGQwZTVhNjljMWE4MTNiZGQ4NWZkYTlmODZiOGM1MmU3MGJlNzJmZMCAjiI=: 00:18:05.214 09:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.214 09:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.214 09:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.214 09:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.214 09:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.214 09:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.214 09:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.214 09:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:05.214 09:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:05.471 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:05.471 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.471 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:05.471 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:05.471 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:05.471 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.471 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.471 09:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.471 09:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.471 09:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.472 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.472 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.037 00:18:06.037 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.037 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.037 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.037 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.037 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.037 09:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:18:06.037 09:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.037 09:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.037 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.037 { 00:18:06.037 "cntlid": 9, 00:18:06.037 "qid": 0, 00:18:06.037 "state": "enabled", 00:18:06.037 "thread": "nvmf_tgt_poll_group_000", 00:18:06.037 "listen_address": { 00:18:06.037 "trtype": "TCP", 00:18:06.037 "adrfam": "IPv4", 00:18:06.037 "traddr": "10.0.0.2", 00:18:06.037 "trsvcid": "4420" 00:18:06.037 }, 00:18:06.037 "peer_address": { 00:18:06.037 "trtype": "TCP", 00:18:06.037 "adrfam": "IPv4", 00:18:06.037 "traddr": "10.0.0.1", 00:18:06.037 "trsvcid": "56364" 00:18:06.037 }, 00:18:06.037 "auth": { 00:18:06.037 "state": "completed", 00:18:06.037 "digest": "sha256", 00:18:06.037 "dhgroup": "ffdhe2048" 00:18:06.037 } 00:18:06.037 } 00:18:06.037 ]' 00:18:06.037 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.295 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.295 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.295 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:06.295 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.295 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.295 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.295 09:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.553 09:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==: --dhchap-ctrl-secret DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=: 00:18:07.487 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.487 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:07.487 09:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.487 09:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.487 09:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.487 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.487 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:07.487 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:07.746 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:07.746 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.746 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:07.746 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:07.746 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:07.746 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.746 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.746 09:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.746 09:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.746 09:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.746 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.746 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.004 00:18:08.004 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.004 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.004 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.262 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.262 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.262 09:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.262 09:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.262 09:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.262 09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.262 { 00:18:08.262 "cntlid": 11, 00:18:08.262 "qid": 0, 00:18:08.262 "state": "enabled", 00:18:08.262 "thread": "nvmf_tgt_poll_group_000", 00:18:08.262 "listen_address": { 00:18:08.262 "trtype": "TCP", 00:18:08.262 "adrfam": "IPv4", 00:18:08.262 "traddr": "10.0.0.2", 00:18:08.262 "trsvcid": "4420" 00:18:08.262 }, 00:18:08.262 "peer_address": { 00:18:08.262 "trtype": "TCP", 00:18:08.262 "adrfam": "IPv4", 00:18:08.262 "traddr": "10.0.0.1", 00:18:08.262 "trsvcid": "56406" 00:18:08.262 }, 00:18:08.262 "auth": { 00:18:08.262 "state": "completed", 00:18:08.262 "digest": "sha256", 00:18:08.262 "dhgroup": "ffdhe2048" 00:18:08.262 } 00:18:08.262 } 00:18:08.262 ]' 00:18:08.262 
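From here the sweep repeats with --dhchap-dhgroups ffdhe2048 instead of null: the same four key pairs, but the exchange now includes a finite-field Diffie-Hellman step (the 2048-bit FFDHE group of RFC 7919) on top of the challenge-response, so the negotiated material no longer depends on the static secret alone; dhgroup null is the plain HMAC challenge-response. The qpairs JSON above reflects the switch in its "dhgroup" field, and only the host-side proposal changes between the two passes:

    # first pass vs. second pass of the dhgroup loop
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048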
09:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.262 09:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.262 09:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.262 09:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:08.262 09:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.520 09:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.520 09:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.520 09:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.778 09:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjY5MTQ0NzBiODRiMzMwZDk0NDg1ODBiYTQwZGU3YTbmrPWt: --dhchap-ctrl-secret DHHC-1:02:ZDZhNWQ2M2M4M2I4MjkwZjZmZGVlNmQ1NmFlYjE3ZmE2MDMzYjdkOWJhZThiMmRlZUxbuA==: 00:18:09.712 09:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.712 09:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.712 09:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.712 09:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.712 09:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.712 09:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.712 09:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:09.712 09:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:09.970 09:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:09.970 09:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.970 09:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:09.970 09:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:09.970 09:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:09.970 09:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.970 09:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.970 09:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.970 09:51:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:09.970 09:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.970 09:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.970 09:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.228 00:18:10.228 09:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.228 09:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.228 09:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.486 09:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.486 09:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.486 09:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.486 09:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.486 09:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.486 09:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.486 { 00:18:10.486 "cntlid": 13, 00:18:10.486 "qid": 0, 00:18:10.486 "state": "enabled", 00:18:10.486 "thread": "nvmf_tgt_poll_group_000", 00:18:10.486 "listen_address": { 00:18:10.486 "trtype": "TCP", 00:18:10.486 "adrfam": "IPv4", 00:18:10.486 "traddr": "10.0.0.2", 00:18:10.486 "trsvcid": "4420" 00:18:10.486 }, 00:18:10.486 "peer_address": { 00:18:10.486 "trtype": "TCP", 00:18:10.486 "adrfam": "IPv4", 00:18:10.486 "traddr": "10.0.0.1", 00:18:10.486 "trsvcid": "56418" 00:18:10.486 }, 00:18:10.486 "auth": { 00:18:10.486 "state": "completed", 00:18:10.486 "digest": "sha256", 00:18:10.486 "dhgroup": "ffdhe2048" 00:18:10.486 } 00:18:10.486 } 00:18:10.486 ]' 00:18:10.486 09:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.486 09:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.486 09:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.486 09:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:10.486 09:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.744 09:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.744 09:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.744 09:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.001 09:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzFlM2QwM2E5M2ZhODA1YzkwYTZkMGY2ZWUwOTY5NzI3ODE2ZGU5NDgyNmIxMGQw5g/R1Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZiN2I5ZTQxMDllN2UyOTg2ZDI0ZDk0OTMyMDdlNmbIITEQ: 00:18:11.933 09:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.933 09:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:11.933 09:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.933 09:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.933 09:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.933 09:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.933 09:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:11.933 09:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:12.190 09:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:12.190 09:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.190 09:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:12.190 09:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:12.190 09:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:12.190 09:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.190 09:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:12.190 09:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.190 09:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.190 09:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.190 09:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.190 09:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.446 00:18:12.446 09:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.446 09:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.446 09:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.703 09:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.703 09:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.703 09:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.703 09:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.703 09:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.703 09:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.703 { 00:18:12.703 "cntlid": 15, 00:18:12.703 "qid": 0, 00:18:12.703 "state": "enabled", 00:18:12.703 "thread": "nvmf_tgt_poll_group_000", 00:18:12.703 "listen_address": { 00:18:12.703 "trtype": "TCP", 00:18:12.703 "adrfam": "IPv4", 00:18:12.703 "traddr": "10.0.0.2", 00:18:12.703 "trsvcid": "4420" 00:18:12.703 }, 00:18:12.703 "peer_address": { 00:18:12.703 "trtype": "TCP", 00:18:12.703 "adrfam": "IPv4", 00:18:12.703 "traddr": "10.0.0.1", 00:18:12.703 "trsvcid": "56436" 00:18:12.703 }, 00:18:12.703 "auth": { 00:18:12.703 "state": "completed", 00:18:12.703 "digest": "sha256", 00:18:12.703 "dhgroup": "ffdhe2048" 00:18:12.703 } 00:18:12.703 } 00:18:12.703 ]' 00:18:12.703 09:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.961 09:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.961 09:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.961 09:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:12.961 09:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.961 09:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.961 09:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.961 09:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.218 09:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODRhZGZiMGQyNTkxNWM5N2E3NDI2YjQ4OGQwZTVhNjljMWE4MTNiZGQ4NWZkYTlmODZiOGM1MmU3MGJlNzJmZMCAjiI=: 00:18:14.151 09:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.151 09:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:14.151 09:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.151 09:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.151 09:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.151 09:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.151 09:51:30 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.151 09:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:14.151 09:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:14.409 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:14.409 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.409 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:14.409 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:14.409 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:14.409 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.409 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.409 09:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.409 09:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.409 09:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.409 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.409 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.667 00:18:14.667 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.667 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.667 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.924 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.924 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.924 09:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.924 09:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.924 09:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.924 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.924 { 00:18:14.924 "cntlid": 17, 00:18:14.924 "qid": 0, 00:18:14.924 "state": "enabled", 00:18:14.924 "thread": "nvmf_tgt_poll_group_000", 00:18:14.924 "listen_address": { 00:18:14.924 "trtype": "TCP", 00:18:14.925 "adrfam": "IPv4", 00:18:14.925 "traddr": 
"10.0.0.2", 00:18:14.925 "trsvcid": "4420" 00:18:14.925 }, 00:18:14.925 "peer_address": { 00:18:14.925 "trtype": "TCP", 00:18:14.925 "adrfam": "IPv4", 00:18:14.925 "traddr": "10.0.0.1", 00:18:14.925 "trsvcid": "53656" 00:18:14.925 }, 00:18:14.925 "auth": { 00:18:14.925 "state": "completed", 00:18:14.925 "digest": "sha256", 00:18:14.925 "dhgroup": "ffdhe3072" 00:18:14.925 } 00:18:14.925 } 00:18:14.925 ]' 00:18:14.925 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.925 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.925 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.183 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:15.183 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.183 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.183 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.183 09:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.441 09:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==: --dhchap-ctrl-secret DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=: 00:18:16.374 09:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.374 09:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:16.374 09:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.374 09:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.374 09:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.374 09:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.374 09:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:16.374 09:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:16.632 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:16.632 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.632 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:16.633 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:16.633 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:16.633 09:51:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.633 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.633 09:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.633 09:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.633 09:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.633 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.633 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.890 00:18:16.890 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.890 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.890 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.146 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.146 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.146 09:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.146 09:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.146 09:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.146 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.146 { 00:18:17.146 "cntlid": 19, 00:18:17.146 "qid": 0, 00:18:17.146 "state": "enabled", 00:18:17.146 "thread": "nvmf_tgt_poll_group_000", 00:18:17.146 "listen_address": { 00:18:17.146 "trtype": "TCP", 00:18:17.146 "adrfam": "IPv4", 00:18:17.146 "traddr": "10.0.0.2", 00:18:17.146 "trsvcid": "4420" 00:18:17.146 }, 00:18:17.146 "peer_address": { 00:18:17.146 "trtype": "TCP", 00:18:17.146 "adrfam": "IPv4", 00:18:17.146 "traddr": "10.0.0.1", 00:18:17.146 "trsvcid": "53672" 00:18:17.146 }, 00:18:17.146 "auth": { 00:18:17.146 "state": "completed", 00:18:17.146 "digest": "sha256", 00:18:17.146 "dhgroup": "ffdhe3072" 00:18:17.146 } 00:18:17.146 } 00:18:17.146 ]' 00:18:17.146 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.146 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.146 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.146 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:17.146 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.403 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.403 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.403 09:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.660 09:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjY5MTQ0NzBiODRiMzMwZDk0NDg1ODBiYTQwZGU3YTbmrPWt: --dhchap-ctrl-secret DHHC-1:02:ZDZhNWQ2M2M4M2I4MjkwZjZmZGVlNmQ1NmFlYjE3ZmE2MDMzYjdkOWJhZThiMmRlZUxbuA==: 00:18:18.623 09:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.623 09:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:18.623 09:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.623 09:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.623 09:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.623 09:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.623 09:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:18.623 09:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:18.884 09:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:18.884 09:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.884 09:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:18.884 09:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:18.884 09:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:18.884 09:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.884 09:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.884 09:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.884 09:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.884 09:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.884 09:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.884 09:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.142 00:18:19.142 09:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.142 09:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.142 09:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.399 09:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.399 09:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.399 09:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.399 09:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.399 09:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.399 09:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.399 { 00:18:19.399 "cntlid": 21, 00:18:19.399 "qid": 0, 00:18:19.399 "state": "enabled", 00:18:19.399 "thread": "nvmf_tgt_poll_group_000", 00:18:19.399 "listen_address": { 00:18:19.399 "trtype": "TCP", 00:18:19.399 "adrfam": "IPv4", 00:18:19.399 "traddr": "10.0.0.2", 00:18:19.399 "trsvcid": "4420" 00:18:19.399 }, 00:18:19.399 "peer_address": { 00:18:19.399 "trtype": "TCP", 00:18:19.399 "adrfam": "IPv4", 00:18:19.399 "traddr": "10.0.0.1", 00:18:19.399 "trsvcid": "53706" 00:18:19.399 }, 00:18:19.399 "auth": { 00:18:19.399 "state": "completed", 00:18:19.399 "digest": "sha256", 00:18:19.399 "dhgroup": "ffdhe3072" 00:18:19.399 } 00:18:19.399 } 00:18:19.399 ]' 00:18:19.399 09:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.399 09:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.399 09:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.399 09:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:19.399 09:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.399 09:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.399 09:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.399 09:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.656 09:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzFlM2QwM2E5M2ZhODA1YzkwYTZkMGY2ZWUwOTY5NzI3ODE2ZGU5NDgyNmIxMGQw5g/R1Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZiN2I5ZTQxMDllN2UyOTg2ZDI0ZDk0OTMyMDdlNmbIITEQ: 00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
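
The cycle that the trace keeps repeating, once per digest/dhgroup/key combination, condenses to the sketch below. This is an outline of the sequence as it appears in the log, not the captured commands themselves: $rpc and $HOSTNQN are shorthand, the DHCHAP_* variables stand in for the DHHC-1 secrets shown inline above, and it assumes the key2/ckey2 key pair was loaded on target and host earlier in auth.sh, before this excerpt begins.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # pin host and target to one digest/dhgroup pair for this cycle
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

  # allow the host on the subsystem with the key pair under test
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # attach through the SPDK host stack, check that the qpair authenticated, then detach
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect: completed
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # repeat the handshake with the kernel initiator, then clean up for the next key
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" \
      --hostid "${HOSTNQN#*uuid:}" --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"
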
00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.042 09:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.300 00:18:21.300 09:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.300 09:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.300 09:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.558 09:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.558 09:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.558 09:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.558 09:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:21.558 09:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.558 09:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.558 { 00:18:21.558 "cntlid": 23, 00:18:21.558 "qid": 0, 00:18:21.558 "state": "enabled", 00:18:21.558 "thread": "nvmf_tgt_poll_group_000", 00:18:21.558 "listen_address": { 00:18:21.558 "trtype": "TCP", 00:18:21.558 "adrfam": "IPv4", 00:18:21.558 "traddr": "10.0.0.2", 00:18:21.558 "trsvcid": "4420" 00:18:21.558 }, 00:18:21.558 "peer_address": { 00:18:21.558 "trtype": "TCP", 00:18:21.558 "adrfam": "IPv4", 00:18:21.558 "traddr": "10.0.0.1", 00:18:21.558 "trsvcid": "53746" 00:18:21.558 }, 00:18:21.558 "auth": { 00:18:21.558 "state": "completed", 00:18:21.558 "digest": "sha256", 00:18:21.558 "dhgroup": "ffdhe3072" 00:18:21.558 } 00:18:21.558 } 00:18:21.558 ]' 00:18:21.558 09:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.558 09:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.558 09:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.816 09:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:21.816 09:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.816 09:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.816 09:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.816 09:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.074 09:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODRhZGZiMGQyNTkxNWM5N2E3NDI2YjQ4OGQwZTVhNjljMWE4MTNiZGQ4NWZkYTlmODZiOGM1MmU3MGJlNzJmZMCAjiI=: 00:18:23.008 09:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.008 09:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:23.008 09:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.008 09:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.008 09:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.008 09:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.008 09:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.008 09:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:23.008 09:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:23.267 09:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:18:23.267 09:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.267 09:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:23.267 09:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:23.267 09:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:23.267 09:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.267 09:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.267 09:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.267 09:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.267 09:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.267 09:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.267 09:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.524 00:18:23.525 09:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.525 09:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.525 09:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.782 09:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.782 09:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.782 09:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.782 09:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.782 09:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.782 09:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.782 { 00:18:23.782 "cntlid": 25, 00:18:23.782 "qid": 0, 00:18:23.782 "state": "enabled", 00:18:23.782 "thread": "nvmf_tgt_poll_group_000", 00:18:23.782 "listen_address": { 00:18:23.782 "trtype": "TCP", 00:18:23.782 "adrfam": "IPv4", 00:18:23.782 "traddr": "10.0.0.2", 00:18:23.782 "trsvcid": "4420" 00:18:23.782 }, 00:18:23.782 "peer_address": { 00:18:23.782 "trtype": "TCP", 00:18:23.782 "adrfam": "IPv4", 00:18:23.782 "traddr": "10.0.0.1", 00:18:23.782 "trsvcid": "53772" 00:18:23.782 }, 00:18:23.782 "auth": { 00:18:23.782 "state": "completed", 00:18:23.782 "digest": "sha256", 00:18:23.782 "dhgroup": "ffdhe4096" 00:18:23.782 } 00:18:23.782 } 00:18:23.782 ]' 00:18:23.782 09:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.782 09:51:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:23.782 09:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.040 09:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:24.040 09:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.040 09:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.040 09:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.040 09:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.298 09:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==: --dhchap-ctrl-secret DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=: 00:18:25.231 09:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.231 09:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.231 09:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.231 09:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.231 09:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.231 09:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.231 09:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:25.231 09:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:25.488 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:25.488 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.488 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:25.488 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:25.488 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:25.488 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.488 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.488 09:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.488 09:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.488 09:51:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.488 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.488 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.746 00:18:25.746 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.746 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.746 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.004 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.004 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.004 09:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.004 09:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.004 09:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.004 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.004 { 00:18:26.004 "cntlid": 27, 00:18:26.004 "qid": 0, 00:18:26.004 "state": "enabled", 00:18:26.004 "thread": "nvmf_tgt_poll_group_000", 00:18:26.004 "listen_address": { 00:18:26.004 "trtype": "TCP", 00:18:26.004 "adrfam": "IPv4", 00:18:26.004 "traddr": "10.0.0.2", 00:18:26.004 "trsvcid": "4420" 00:18:26.004 }, 00:18:26.004 "peer_address": { 00:18:26.004 "trtype": "TCP", 00:18:26.004 "adrfam": "IPv4", 00:18:26.004 "traddr": "10.0.0.1", 00:18:26.004 "trsvcid": "35456" 00:18:26.004 }, 00:18:26.004 "auth": { 00:18:26.004 "state": "completed", 00:18:26.004 "digest": "sha256", 00:18:26.004 "dhgroup": "ffdhe4096" 00:18:26.004 } 00:18:26.004 } 00:18:26.004 ]' 00:18:26.004 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.262 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:26.262 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.262 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:26.262 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.262 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.262 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.262 09:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.519 09:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjY5MTQ0NzBiODRiMzMwZDk0NDg1ODBiYTQwZGU3YTbmrPWt: --dhchap-ctrl-secret DHHC-1:02:ZDZhNWQ2M2M4M2I4MjkwZjZmZGVlNmQ1NmFlYjE3ZmE2MDMzYjdkOWJhZThiMmRlZUxbuA==: 00:18:27.448 09:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.448 09:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:27.448 09:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.448 09:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.448 09:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.448 09:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.448 09:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:27.448 09:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:27.707 09:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:27.707 09:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.707 09:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:27.707 09:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:27.707 09:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:27.707 09:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.707 09:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.707 09:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.707 09:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.707 09:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.707 09:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.707 09:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.292 00:18:28.292 09:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.292 09:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.292 09:51:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.292 09:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.292 09:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.292 09:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.292 09:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.292 09:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.293 09:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.293 { 00:18:28.293 "cntlid": 29, 00:18:28.293 "qid": 0, 00:18:28.293 "state": "enabled", 00:18:28.293 "thread": "nvmf_tgt_poll_group_000", 00:18:28.293 "listen_address": { 00:18:28.293 "trtype": "TCP", 00:18:28.293 "adrfam": "IPv4", 00:18:28.293 "traddr": "10.0.0.2", 00:18:28.293 "trsvcid": "4420" 00:18:28.293 }, 00:18:28.293 "peer_address": { 00:18:28.293 "trtype": "TCP", 00:18:28.293 "adrfam": "IPv4", 00:18:28.293 "traddr": "10.0.0.1", 00:18:28.293 "trsvcid": "35496" 00:18:28.293 }, 00:18:28.293 "auth": { 00:18:28.293 "state": "completed", 00:18:28.293 "digest": "sha256", 00:18:28.293 "dhgroup": "ffdhe4096" 00:18:28.293 } 00:18:28.293 } 00:18:28.293 ]' 00:18:28.293 09:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.550 09:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.550 09:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.550 09:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:28.550 09:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.550 09:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.550 09:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.550 09:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.807 09:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzFlM2QwM2E5M2ZhODA1YzkwYTZkMGY2ZWUwOTY5NzI3ODE2ZGU5NDgyNmIxMGQw5g/R1Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZiN2I5ZTQxMDllN2UyOTg2ZDI0ZDk0OTMyMDdlNmbIITEQ: 00:18:29.740 09:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.740 09:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:29.740 09:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.740 09:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.740 09:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.740 09:51:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.740 09:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:29.740 09:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:29.998 09:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:29.998 09:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.998 09:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:29.998 09:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:29.998 09:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:29.998 09:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.998 09:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:29.998 09:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.998 09:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.998 09:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.998 09:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.998 09:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.256 00:18:30.256 09:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.256 09:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.256 09:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.513 09:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.513 09:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.513 09:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.513 09:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.513 09:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.513 09:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.513 { 00:18:30.513 "cntlid": 31, 00:18:30.513 "qid": 0, 00:18:30.513 "state": "enabled", 00:18:30.513 "thread": "nvmf_tgt_poll_group_000", 00:18:30.513 "listen_address": { 00:18:30.513 "trtype": "TCP", 00:18:30.513 "adrfam": "IPv4", 00:18:30.513 "traddr": "10.0.0.2", 00:18:30.513 "trsvcid": "4420" 00:18:30.513 }, 
00:18:30.513 "peer_address": { 00:18:30.513 "trtype": "TCP", 00:18:30.513 "adrfam": "IPv4", 00:18:30.513 "traddr": "10.0.0.1", 00:18:30.513 "trsvcid": "35534" 00:18:30.513 }, 00:18:30.513 "auth": { 00:18:30.513 "state": "completed", 00:18:30.513 "digest": "sha256", 00:18:30.513 "dhgroup": "ffdhe4096" 00:18:30.513 } 00:18:30.513 } 00:18:30.513 ]' 00:18:30.513 09:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.772 09:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:30.772 09:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.772 09:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:30.772 09:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.773 09:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.773 09:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.773 09:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.031 09:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODRhZGZiMGQyNTkxNWM5N2E3NDI2YjQ4OGQwZTVhNjljMWE4MTNiZGQ4NWZkYTlmODZiOGM1MmU3MGJlNzJmZMCAjiI=: 00:18:31.963 09:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.963 09:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:31.963 09:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.963 09:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.963 09:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.963 09:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:31.963 09:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.963 09:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:31.963 09:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:32.221 09:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:32.221 09:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.221 09:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:32.221 09:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:32.221 09:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:32.221 09:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:32.221 09:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.221 09:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.221 09:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.221 09:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.221 09:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.221 09:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.795 00:18:32.795 09:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.795 09:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.795 09:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.053 09:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.053 09:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.053 09:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.053 09:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.053 09:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.053 09:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.053 { 00:18:33.053 "cntlid": 33, 00:18:33.053 "qid": 0, 00:18:33.053 "state": "enabled", 00:18:33.053 "thread": "nvmf_tgt_poll_group_000", 00:18:33.053 "listen_address": { 00:18:33.053 "trtype": "TCP", 00:18:33.053 "adrfam": "IPv4", 00:18:33.053 "traddr": "10.0.0.2", 00:18:33.053 "trsvcid": "4420" 00:18:33.053 }, 00:18:33.053 "peer_address": { 00:18:33.053 "trtype": "TCP", 00:18:33.053 "adrfam": "IPv4", 00:18:33.053 "traddr": "10.0.0.1", 00:18:33.053 "trsvcid": "35574" 00:18:33.053 }, 00:18:33.053 "auth": { 00:18:33.053 "state": "completed", 00:18:33.053 "digest": "sha256", 00:18:33.053 "dhgroup": "ffdhe6144" 00:18:33.053 } 00:18:33.053 } 00:18:33.053 ]' 00:18:33.053 09:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.053 09:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:33.053 09:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.053 09:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:33.053 09:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.053 09:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.053 09:51:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.053 09:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.311 09:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==: --dhchap-ctrl-secret DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=: 00:18:34.244 09:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.244 09:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:34.244 09:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.244 09:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.244 09:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.244 09:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.244 09:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:34.244 09:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:34.502 09:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:34.502 09:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.502 09:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:34.502 09:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:34.502 09:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:34.502 09:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.502 09:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.502 09:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.502 09:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.502 09:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.502 09:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.502 09:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.067 00:18:35.067 09:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.067 09:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.067 09:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.325 09:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.325 09:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.325 09:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.325 09:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.325 09:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.325 09:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.325 { 00:18:35.325 "cntlid": 35, 00:18:35.325 "qid": 0, 00:18:35.325 "state": "enabled", 00:18:35.325 "thread": "nvmf_tgt_poll_group_000", 00:18:35.325 "listen_address": { 00:18:35.325 "trtype": "TCP", 00:18:35.325 "adrfam": "IPv4", 00:18:35.325 "traddr": "10.0.0.2", 00:18:35.325 "trsvcid": "4420" 00:18:35.325 }, 00:18:35.325 "peer_address": { 00:18:35.325 "trtype": "TCP", 00:18:35.325 "adrfam": "IPv4", 00:18:35.325 "traddr": "10.0.0.1", 00:18:35.325 "trsvcid": "39726" 00:18:35.325 }, 00:18:35.325 "auth": { 00:18:35.325 "state": "completed", 00:18:35.325 "digest": "sha256", 00:18:35.325 "dhgroup": "ffdhe6144" 00:18:35.325 } 00:18:35.325 } 00:18:35.325 ]' 00:18:35.325 09:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.325 09:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:35.325 09:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.583 09:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:35.583 09:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.583 09:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.583 09:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.583 09:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.841 09:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjY5MTQ0NzBiODRiMzMwZDk0NDg1ODBiYTQwZGU3YTbmrPWt: --dhchap-ctrl-secret DHHC-1:02:ZDZhNWQ2M2M4M2I4MjkwZjZmZGVlNmQ1NmFlYjE3ZmE2MDMzYjdkOWJhZThiMmRlZUxbuA==: 00:18:36.773 09:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.773 09:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:36.773 09:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.773 09:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.774 09:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.774 09:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.774 09:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:36.774 09:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:37.031 09:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:37.031 09:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.031 09:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:37.031 09:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:37.031 09:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:37.031 09:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.031 09:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.031 09:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.031 09:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.031 09:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.031 09:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.031 09:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.597 00:18:37.597 09:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.597 09:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.597 09:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.856 09:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.856 09:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.856 09:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.856 09:51:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:37.856 09:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.856 09:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.856 { 00:18:37.856 "cntlid": 37, 00:18:37.856 "qid": 0, 00:18:37.856 "state": "enabled", 00:18:37.856 "thread": "nvmf_tgt_poll_group_000", 00:18:37.856 "listen_address": { 00:18:37.856 "trtype": "TCP", 00:18:37.856 "adrfam": "IPv4", 00:18:37.856 "traddr": "10.0.0.2", 00:18:37.856 "trsvcid": "4420" 00:18:37.856 }, 00:18:37.856 "peer_address": { 00:18:37.856 "trtype": "TCP", 00:18:37.856 "adrfam": "IPv4", 00:18:37.856 "traddr": "10.0.0.1", 00:18:37.856 "trsvcid": "39754" 00:18:37.856 }, 00:18:37.856 "auth": { 00:18:37.856 "state": "completed", 00:18:37.856 "digest": "sha256", 00:18:37.856 "dhgroup": "ffdhe6144" 00:18:37.856 } 00:18:37.856 } 00:18:37.856 ]' 00:18:37.856 09:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.856 09:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.856 09:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.856 09:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:37.856 09:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.856 09:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.856 09:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.856 09:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.114 09:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzFlM2QwM2E5M2ZhODA1YzkwYTZkMGY2ZWUwOTY5NzI3ODE2ZGU5NDgyNmIxMGQw5g/R1Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZiN2I5ZTQxMDllN2UyOTg2ZDI0ZDk0OTMyMDdlNmbIITEQ: 00:18:39.487 09:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.487 09:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:39.487 09:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.487 09:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.487 09:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.487 09:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.487 09:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:39.487 09:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:39.487 09:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:18:39.487 09:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.487 09:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:39.487 09:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:39.487 09:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:39.487 09:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.487 09:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:39.487 09:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.487 09:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.487 09:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.487 09:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.487 09:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.052 00:18:40.052 09:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.052 09:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.052 09:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.309 09:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.309 09:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.309 09:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.309 09:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.309 09:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.309 09:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.309 { 00:18:40.309 "cntlid": 39, 00:18:40.309 "qid": 0, 00:18:40.310 "state": "enabled", 00:18:40.310 "thread": "nvmf_tgt_poll_group_000", 00:18:40.310 "listen_address": { 00:18:40.310 "trtype": "TCP", 00:18:40.310 "adrfam": "IPv4", 00:18:40.310 "traddr": "10.0.0.2", 00:18:40.310 "trsvcid": "4420" 00:18:40.310 }, 00:18:40.310 "peer_address": { 00:18:40.310 "trtype": "TCP", 00:18:40.310 "adrfam": "IPv4", 00:18:40.310 "traddr": "10.0.0.1", 00:18:40.310 "trsvcid": "39778" 00:18:40.310 }, 00:18:40.310 "auth": { 00:18:40.310 "state": "completed", 00:18:40.310 "digest": "sha256", 00:18:40.310 "dhgroup": "ffdhe6144" 00:18:40.310 } 00:18:40.310 } 00:18:40.310 ]' 00:18:40.310 09:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.310 09:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.310 09:51:56 
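
Every iteration in this trace follows the same five-step shape: configure the host side's allowed digests and DH groups, authorize the host NQN on the subsystem with this keyid's key pair, attach a controller (which runs the DH-HMAC-CHAP handshake), verify the qpair, then tear everything down. A condensed sketch of one iteration, assuming key0/ckey0 name keys registered earlier in the run (not shown in this excerpt) and that $hostnqn holds the uuid-based host NQN used throughout:

  # Host side: a second SPDK instance, driven via -s /var/tmp/host.sock.
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  # Target side: default RPC socket.
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Attaching the controller is what performs the authentication handshake.
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # ... verify via nvmf_subsystem_get_qpairs, then tear down:
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The separate socket matters because the test runs two SPDK instances, an NVMe-oF target and an SPDK host/initiator, which is why the hostrpc wrapper at auth.sh@31 always passes -s /var/tmp/host.sock.
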
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.310 09:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:40.310 09:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.310 09:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.310 09:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.310 09:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.567 09:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODRhZGZiMGQyNTkxNWM5N2E3NDI2YjQ4OGQwZTVhNjljMWE4MTNiZGQ4NWZkYTlmODZiOGM1MmU3MGJlNzJmZMCAjiI=: 00:18:41.499 09:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.499 09:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:41.499 09:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.499 09:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.499 09:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.499 09:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.499 09:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.499 09:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:41.499 09:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:41.757 09:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:41.757 09:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.757 09:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:41.757 09:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:41.757 09:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:41.757 09:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.757 09:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.757 09:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.757 09:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.757 09:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.757 09:51:58 
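
The secrets passed to nvme connect in this trace use the DHHC-1:<t>:<base64>: on-wire key format, where <t> records how the configured secret was transformed (00 untransformed; 01/02/03 conventionally SHA-256/384/512). The four keys deliberately cover all four variants, and key3's DHHC-1:03: host secret is paired with no --dhchap-ctrl-secret at all, so that iteration only authenticates the host to the controller. A hedged sketch for minting such a secret with a recent nvme-cli (the gen-dhchap-key subcommand and its flag spellings vary across nvme-cli versions, so treat this as an approximation):

  # 48-byte random secret transformed with SHA-256; --hmac=0 would emit
  # the untransformed DHHC-1:00:... form seen for key0.
  nvme gen-dhchap-key --key-length=48 --hmac=1 --nqn="$hostnqn"
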
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.757 09:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.690 00:18:42.691 09:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.691 09:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.691 09:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.948 09:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.948 09:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.948 09:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.948 09:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.948 09:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.948 09:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.948 { 00:18:42.948 "cntlid": 41, 00:18:42.948 "qid": 0, 00:18:42.948 "state": "enabled", 00:18:42.948 "thread": "nvmf_tgt_poll_group_000", 00:18:42.948 "listen_address": { 00:18:42.948 "trtype": "TCP", 00:18:42.948 "adrfam": "IPv4", 00:18:42.948 "traddr": "10.0.0.2", 00:18:42.948 "trsvcid": "4420" 00:18:42.948 }, 00:18:42.948 "peer_address": { 00:18:42.948 "trtype": "TCP", 00:18:42.948 "adrfam": "IPv4", 00:18:42.948 "traddr": "10.0.0.1", 00:18:42.948 "trsvcid": "39808" 00:18:42.948 }, 00:18:42.948 "auth": { 00:18:42.948 "state": "completed", 00:18:42.948 "digest": "sha256", 00:18:42.948 "dhgroup": "ffdhe8192" 00:18:42.948 } 00:18:42.948 } 00:18:42.948 ]' 00:18:42.948 09:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.948 09:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:42.948 09:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.948 09:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:42.948 09:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.948 09:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.948 09:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.948 09:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.513 09:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==: --dhchap-ctrl-secret DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=: 00:18:44.444 09:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.444 09:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.444 09:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.444 09:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.444 09:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.444 09:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.444 09:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:44.444 09:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:44.701 09:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:44.701 09:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.701 09:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:44.701 09:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:44.701 09:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:44.701 09:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.701 09:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.701 09:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.701 09:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.701 09:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.701 09:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.701 09:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.633 00:18:45.633 09:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.633 09:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.633 09:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.633 09:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.633 09:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.633 09:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.633 09:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.633 09:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.633 09:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.633 { 00:18:45.633 "cntlid": 43, 00:18:45.633 "qid": 0, 00:18:45.634 "state": "enabled", 00:18:45.634 "thread": "nvmf_tgt_poll_group_000", 00:18:45.634 "listen_address": { 00:18:45.634 "trtype": "TCP", 00:18:45.634 "adrfam": "IPv4", 00:18:45.634 "traddr": "10.0.0.2", 00:18:45.634 "trsvcid": "4420" 00:18:45.634 }, 00:18:45.634 "peer_address": { 00:18:45.634 "trtype": "TCP", 00:18:45.634 "adrfam": "IPv4", 00:18:45.634 "traddr": "10.0.0.1", 00:18:45.634 "trsvcid": "59866" 00:18:45.634 }, 00:18:45.634 "auth": { 00:18:45.634 "state": "completed", 00:18:45.634 "digest": "sha256", 00:18:45.634 "dhgroup": "ffdhe8192" 00:18:45.634 } 00:18:45.634 } 00:18:45.634 ]' 00:18:45.634 09:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.890 09:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.890 09:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.890 09:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:45.890 09:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.890 09:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.890 09:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.890 09:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.147 09:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjY5MTQ0NzBiODRiMzMwZDk0NDg1ODBiYTQwZGU3YTbmrPWt: --dhchap-ctrl-secret DHHC-1:02:ZDZhNWQ2M2M4M2I4MjkwZjZmZGVlNmQ1NmFlYjE3ZmE2MDMzYjdkOWJhZThiMmRlZUxbuA==: 00:18:47.108 09:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.108 09:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:47.108 09:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.108 09:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.108 09:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.108 09:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:18:47.108 09:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:47.108 09:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:47.365 09:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:47.365 09:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.365 09:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:47.365 09:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:47.365 09:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:47.365 09:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.365 09:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.365 09:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.365 09:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.365 09:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.365 09:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.365 09:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.298 00:18:48.298 09:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.298 09:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.298 09:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.556 09:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.556 09:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.556 09:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.556 09:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.556 09:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.556 09:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.556 { 00:18:48.556 "cntlid": 45, 00:18:48.556 "qid": 0, 00:18:48.556 "state": "enabled", 00:18:48.556 "thread": "nvmf_tgt_poll_group_000", 00:18:48.556 "listen_address": { 00:18:48.556 "trtype": "TCP", 00:18:48.556 "adrfam": "IPv4", 00:18:48.556 "traddr": "10.0.0.2", 00:18:48.556 "trsvcid": "4420" 
00:18:48.556 }, 00:18:48.556 "peer_address": { 00:18:48.556 "trtype": "TCP", 00:18:48.556 "adrfam": "IPv4", 00:18:48.556 "traddr": "10.0.0.1", 00:18:48.556 "trsvcid": "59890" 00:18:48.556 }, 00:18:48.556 "auth": { 00:18:48.556 "state": "completed", 00:18:48.556 "digest": "sha256", 00:18:48.556 "dhgroup": "ffdhe8192" 00:18:48.556 } 00:18:48.556 } 00:18:48.556 ]' 00:18:48.556 09:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.556 09:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.556 09:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.556 09:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:48.556 09:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.814 09:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.814 09:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.814 09:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.814 09:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzFlM2QwM2E5M2ZhODA1YzkwYTZkMGY2ZWUwOTY5NzI3ODE2ZGU5NDgyNmIxMGQw5g/R1Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZiN2I5ZTQxMDllN2UyOTg2ZDI0ZDk0OTMyMDdlNmbIITEQ: 00:18:49.748 09:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.748 09:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:49.748 09:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.748 09:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.006 09:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.006 09:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.006 09:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:50.006 09:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:50.264 09:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:50.264 09:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.264 09:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:50.264 09:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:50.264 09:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:50.264 09:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.264 09:52:06 
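
The ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion at auth.sh@37 is what makes the keyid 3 iterations different: bash's :+ operator produces the bracketed words only when the variable is set and non-empty, so an empty ckeys[3] slot means the nvmf_subsystem_add_host call that follows carries --dhchap-key key3 and no controller key, exercising unidirectional authentication (the target verifies the host, but not the reverse). The mechanism in isolation:

  # ${var:+word} expands to word only if var is set and non-empty.
  ckeys=([0]=ckey0 [3]="")
  echo add_host ${ckeys[0]:+--dhchap-ctrlr-key ckey0}   # add_host --dhchap-ctrlr-key ckey0
  echo add_host ${ckeys[3]:+--dhchap-ctrlr-key ckey3}   # add_host
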
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:50.264 09:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.264 09:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.264 09:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.264 09:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:50.264 09:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.198 00:18:51.198 09:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.198 09:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.198 09:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.198 09:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.198 09:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.198 09:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.198 09:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.198 09:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.198 09:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.198 { 00:18:51.198 "cntlid": 47, 00:18:51.198 "qid": 0, 00:18:51.198 "state": "enabled", 00:18:51.198 "thread": "nvmf_tgt_poll_group_000", 00:18:51.198 "listen_address": { 00:18:51.198 "trtype": "TCP", 00:18:51.198 "adrfam": "IPv4", 00:18:51.198 "traddr": "10.0.0.2", 00:18:51.198 "trsvcid": "4420" 00:18:51.198 }, 00:18:51.198 "peer_address": { 00:18:51.198 "trtype": "TCP", 00:18:51.198 "adrfam": "IPv4", 00:18:51.198 "traddr": "10.0.0.1", 00:18:51.198 "trsvcid": "59904" 00:18:51.198 }, 00:18:51.198 "auth": { 00:18:51.198 "state": "completed", 00:18:51.198 "digest": "sha256", 00:18:51.198 "dhgroup": "ffdhe8192" 00:18:51.198 } 00:18:51.198 } 00:18:51.198 ]' 00:18:51.198 09:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.456 09:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.456 09:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.456 09:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:51.456 09:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.456 09:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.456 09:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.456 
09:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.714 09:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODRhZGZiMGQyNTkxNWM5N2E3NDI2YjQ4OGQwZTVhNjljMWE4MTNiZGQ4NWZkYTlmODZiOGM1MmU3MGJlNzJmZMCAjiI=: 00:18:52.646 09:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.646 09:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.646 09:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.646 09:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.646 09:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.646 09:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:52.646 09:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.646 09:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.646 09:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:52.646 09:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:52.903 09:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:52.903 09:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.903 09:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.903 09:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:52.903 09:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:52.903 09:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.903 09:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.903 09:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.903 09:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.903 09:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.903 09:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.903 09:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.466 00:18:53.466 09:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.466 09:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.466 09:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.466 09:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.466 09:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.466 09:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.466 09:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.466 09:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.466 09:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.466 { 00:18:53.466 "cntlid": 49, 00:18:53.466 "qid": 0, 00:18:53.466 "state": "enabled", 00:18:53.466 "thread": "nvmf_tgt_poll_group_000", 00:18:53.466 "listen_address": { 00:18:53.466 "trtype": "TCP", 00:18:53.466 "adrfam": "IPv4", 00:18:53.466 "traddr": "10.0.0.2", 00:18:53.466 "trsvcid": "4420" 00:18:53.466 }, 00:18:53.466 "peer_address": { 00:18:53.466 "trtype": "TCP", 00:18:53.466 "adrfam": "IPv4", 00:18:53.466 "traddr": "10.0.0.1", 00:18:53.466 "trsvcid": "59928" 00:18:53.466 }, 00:18:53.466 "auth": { 00:18:53.466 "state": "completed", 00:18:53.466 "digest": "sha384", 00:18:53.466 "dhgroup": "null" 00:18:53.466 } 00:18:53.466 } 00:18:53.466 ]' 00:18:53.466 09:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.723 09:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.723 09:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.723 09:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:53.723 09:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.723 09:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.723 09:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.723 09:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.980 09:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==: --dhchap-ctrl-secret DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=: 00:18:54.913 09:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.913 09:52:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.913 09:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.913 09:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.913 09:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.913 09:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.913 09:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:54.913 09:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:55.171 09:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:55.171 09:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.171 09:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:55.171 09:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:55.171 09:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:55.171 09:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.171 09:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.171 09:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.171 09:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.171 09:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.171 09:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.171 09:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.429 00:18:55.429 09:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.429 09:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.429 09:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.687 09:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.687 09:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.687 09:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.687 09:52:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:55.687 09:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.687 09:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.687 { 00:18:55.687 "cntlid": 51, 00:18:55.687 "qid": 0, 00:18:55.687 "state": "enabled", 00:18:55.687 "thread": "nvmf_tgt_poll_group_000", 00:18:55.687 "listen_address": { 00:18:55.687 "trtype": "TCP", 00:18:55.687 "adrfam": "IPv4", 00:18:55.687 "traddr": "10.0.0.2", 00:18:55.687 "trsvcid": "4420" 00:18:55.687 }, 00:18:55.687 "peer_address": { 00:18:55.687 "trtype": "TCP", 00:18:55.687 "adrfam": "IPv4", 00:18:55.687 "traddr": "10.0.0.1", 00:18:55.687 "trsvcid": "52676" 00:18:55.687 }, 00:18:55.687 "auth": { 00:18:55.687 "state": "completed", 00:18:55.687 "digest": "sha384", 00:18:55.687 "dhgroup": "null" 00:18:55.687 } 00:18:55.687 } 00:18:55.687 ]' 00:18:55.687 09:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.687 09:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.687 09:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.945 09:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:55.945 09:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.945 09:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.945 09:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.945 09:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.203 09:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjY5MTQ0NzBiODRiMzMwZDk0NDg1ODBiYTQwZGU3YTbmrPWt: --dhchap-ctrl-secret DHHC-1:02:ZDZhNWQ2M2M4M2I4MjkwZjZmZGVlNmQ1NmFlYjE3ZmE2MDMzYjdkOWJhZThiMmRlZUxbuA==: 00:18:57.140 09:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.140 09:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.140 09:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.140 09:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.140 09:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.140 09:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.140 09:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:57.140 09:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:57.397 09:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:57.397 09:52:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.397 09:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:57.397 09:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:57.397 09:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:57.397 09:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.397 09:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.397 09:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.397 09:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.397 09:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.397 09:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.397 09:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.654 00:18:57.654 09:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.654 09:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.654 09:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.912 09:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.912 09:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.912 09:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.912 09:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.912 09:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.912 09:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.912 { 00:18:57.912 "cntlid": 53, 00:18:57.912 "qid": 0, 00:18:57.912 "state": "enabled", 00:18:57.912 "thread": "nvmf_tgt_poll_group_000", 00:18:57.912 "listen_address": { 00:18:57.912 "trtype": "TCP", 00:18:57.912 "adrfam": "IPv4", 00:18:57.912 "traddr": "10.0.0.2", 00:18:57.912 "trsvcid": "4420" 00:18:57.912 }, 00:18:57.912 "peer_address": { 00:18:57.912 "trtype": "TCP", 00:18:57.912 "adrfam": "IPv4", 00:18:57.912 "traddr": "10.0.0.1", 00:18:57.912 "trsvcid": "52706" 00:18:57.912 }, 00:18:57.912 "auth": { 00:18:57.912 "state": "completed", 00:18:57.912 "digest": "sha384", 00:18:57.912 "dhgroup": "null" 00:18:57.912 } 00:18:57.912 } 00:18:57.912 ]' 00:18:57.912 09:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.912 09:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:18:57.912 09:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.912 09:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:57.912 09:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.912 09:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.912 09:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.912 09:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.171 09:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzFlM2QwM2E5M2ZhODA1YzkwYTZkMGY2ZWUwOTY5NzI3ODE2ZGU5NDgyNmIxMGQw5g/R1Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZiN2I5ZTQxMDllN2UyOTg2ZDI0ZDk0OTMyMDdlNmbIITEQ: 00:18:59.546 09:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.546 09:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.546 09:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.546 09:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.546 09:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.546 09:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.547 09:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:59.547 09:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:59.547 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:59.547 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.547 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:59.547 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:59.547 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:59.547 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.547 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:59.547 09:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.547 09:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.547 09:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.547 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
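
The ckey assignment at auth.sh@37 is worth unpacking: the ${ckeys[$3]:+...} expansion emits the --dhchap-ctrlr-key flag only when a controller secret exists for that key index, which is why this key3 iteration authenticates the host but not the controller (unidirectional). A small illustration with hypothetical values:

  ckeys=("c0" "c1" "c2" "")     # index 3 deliberately has no controller secret
  for i in "${!ckeys[@]}"; do
      ckey=(${ckeys[i]:+--dhchap-ctrlr-key "ckey$i"})
      echo "key$i -> ${ckey[*]:-<no ctrlr-key flag>}"
  done
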
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.547 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.805 00:18:59.805 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.805 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.805 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.063 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.063 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.063 09:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.063 09:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.063 09:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.063 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.063 { 00:19:00.063 "cntlid": 55, 00:19:00.063 "qid": 0, 00:19:00.063 "state": "enabled", 00:19:00.063 "thread": "nvmf_tgt_poll_group_000", 00:19:00.063 "listen_address": { 00:19:00.063 "trtype": "TCP", 00:19:00.063 "adrfam": "IPv4", 00:19:00.063 "traddr": "10.0.0.2", 00:19:00.063 "trsvcid": "4420" 00:19:00.063 }, 00:19:00.063 "peer_address": { 00:19:00.063 "trtype": "TCP", 00:19:00.063 "adrfam": "IPv4", 00:19:00.063 "traddr": "10.0.0.1", 00:19:00.063 "trsvcid": "52740" 00:19:00.063 }, 00:19:00.063 "auth": { 00:19:00.063 "state": "completed", 00:19:00.063 "digest": "sha384", 00:19:00.063 "dhgroup": "null" 00:19:00.063 } 00:19:00.063 } 00:19:00.063 ]' 00:19:00.063 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.063 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:00.063 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.352 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:00.352 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.352 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.352 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.352 09:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.610 09:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODRhZGZiMGQyNTkxNWM5N2E3NDI2YjQ4OGQwZTVhNjljMWE4MTNiZGQ4NWZkYTlmODZiOGM1MmU3MGJlNzJmZMCAjiI=: 00:19:01.546 09:52:18 
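
After each RPC-level attach/detach the same key material is exercised end-to-end through the kernel initiator, as in the nvme connect line above. A condensed sketch of that connect/disconnect pair (nvme-cli over TCP; $key and $ckey would hold the DHHC-1 strings, with the ctrl secret omitted for the unidirectional key3 case):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "$key" ${ckey:+--dhchap-ctrl-secret "$ckey"}
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
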
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.546 09:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.546 09:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.546 09:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.546 09:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.546 09:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.546 09:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.546 09:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:01.546 09:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:01.805 09:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:01.805 09:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.805 09:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:01.805 09:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:01.805 09:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:01.805 09:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.805 09:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.805 09:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.805 09:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.805 09:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.805 09:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.805 09:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.063 00:19:02.063 09:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.063 09:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.063 09:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.321 09:52:18 nvmf_tcp.nvmf_auth_target -- 
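
Before every iteration the host initiator is pinned to a single digest/DH-group pair (auth.sh@94), so the attach that follows can only succeed by negotiating exactly that combination; anything else fails the test. The call, isolated:

  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
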
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.321 09:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.321 09:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.321 09:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.321 09:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.321 09:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.321 { 00:19:02.321 "cntlid": 57, 00:19:02.321 "qid": 0, 00:19:02.321 "state": "enabled", 00:19:02.321 "thread": "nvmf_tgt_poll_group_000", 00:19:02.321 "listen_address": { 00:19:02.321 "trtype": "TCP", 00:19:02.321 "adrfam": "IPv4", 00:19:02.321 "traddr": "10.0.0.2", 00:19:02.321 "trsvcid": "4420" 00:19:02.321 }, 00:19:02.321 "peer_address": { 00:19:02.321 "trtype": "TCP", 00:19:02.321 "adrfam": "IPv4", 00:19:02.321 "traddr": "10.0.0.1", 00:19:02.321 "trsvcid": "52764" 00:19:02.321 }, 00:19:02.321 "auth": { 00:19:02.321 "state": "completed", 00:19:02.321 "digest": "sha384", 00:19:02.321 "dhgroup": "ffdhe2048" 00:19:02.321 } 00:19:02.321 } 00:19:02.321 ]' 00:19:02.321 09:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.321 09:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.321 09:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.321 09:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:02.321 09:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.579 09:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.579 09:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.579 09:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.838 09:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==: --dhchap-ctrl-secret DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=: 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.777 09:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.345 00:19:04.345 09:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.345 09:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.345 09:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.603 09:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.603 09:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.603 09:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.603 09:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.603 09:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.603 09:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.603 { 00:19:04.603 "cntlid": 59, 00:19:04.603 "qid": 0, 00:19:04.603 "state": "enabled", 00:19:04.603 "thread": "nvmf_tgt_poll_group_000", 00:19:04.603 "listen_address": { 00:19:04.603 "trtype": "TCP", 00:19:04.603 "adrfam": "IPv4", 00:19:04.603 "traddr": "10.0.0.2", 00:19:04.603 "trsvcid": "4420" 00:19:04.603 }, 00:19:04.603 "peer_address": { 00:19:04.603 "trtype": "TCP", 00:19:04.603 "adrfam": "IPv4", 00:19:04.603 
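
Every "hostrpc <cmd>" frame (auth.sh@31) expands to the same rpc.py invocation against the host-side socket, so the wrapper is presumably nothing more than the sketch below ($rootdir being the spdk checkout):

  hostrpc() {
      # /var/tmp/host.sock belongs to the second, host-role SPDK application
      "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
  }
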
"traddr": "10.0.0.1", 00:19:04.603 "trsvcid": "52804" 00:19:04.603 }, 00:19:04.603 "auth": { 00:19:04.603 "state": "completed", 00:19:04.603 "digest": "sha384", 00:19:04.603 "dhgroup": "ffdhe2048" 00:19:04.603 } 00:19:04.603 } 00:19:04.603 ]' 00:19:04.603 09:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.603 09:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.603 09:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.603 09:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:04.603 09:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.603 09:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.603 09:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.603 09:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.860 09:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjY5MTQ0NzBiODRiMzMwZDk0NDg1ODBiYTQwZGU3YTbmrPWt: --dhchap-ctrl-secret DHHC-1:02:ZDZhNWQ2M2M4M2I4MjkwZjZmZGVlNmQ1NmFlYjE3ZmE2MDMzYjdkOWJhZThiMmRlZUxbuA==: 00:19:05.833 09:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.833 09:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:05.833 09:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.833 09:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.833 09:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.833 09:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.833 09:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:05.833 09:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:06.092 09:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:06.092 09:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.092 09:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:06.092 09:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:06.092 09:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:06.092 09:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.092 09:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.092 09:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.092 09:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.092 09:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.092 09:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.092 09:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.350 00:19:06.350 09:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.350 09:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.350 09:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.609 09:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.609 09:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.609 09:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.609 09:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.609 09:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.609 09:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.609 { 00:19:06.609 "cntlid": 61, 00:19:06.609 "qid": 0, 00:19:06.609 "state": "enabled", 00:19:06.609 "thread": "nvmf_tgt_poll_group_000", 00:19:06.609 "listen_address": { 00:19:06.609 "trtype": "TCP", 00:19:06.609 "adrfam": "IPv4", 00:19:06.609 "traddr": "10.0.0.2", 00:19:06.609 "trsvcid": "4420" 00:19:06.609 }, 00:19:06.609 "peer_address": { 00:19:06.609 "trtype": "TCP", 00:19:06.609 "adrfam": "IPv4", 00:19:06.609 "traddr": "10.0.0.1", 00:19:06.609 "trsvcid": "50684" 00:19:06.609 }, 00:19:06.609 "auth": { 00:19:06.609 "state": "completed", 00:19:06.609 "digest": "sha384", 00:19:06.609 "dhgroup": "ffdhe2048" 00:19:06.609 } 00:19:06.609 } 00:19:06.609 ]' 00:19:06.609 09:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.609 09:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.609 09:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.609 09:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:06.869 09:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.869 09:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.869 09:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.869 09:52:23 nvmf_tcp.nvmf_auth_target -- 
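
The recurring autotest_common.sh@559/@587 frames bracketing every rpc_cmd are harness bookkeeping: xtrace is silenced for the RPC itself and the call's exit status is asserted afterwards ("[[ 0 == 0 ]]" is that assertion with a zero status already substituted). Loosely, the effect is equivalent to the following, though the real helper is more elaborate:

  rpc_cmd() {
      xtrace_disable                     # the "set +x" lines in this log
      "$rootdir/scripts/rpc.py" "$@"
      local rc=$?
      xtrace_restore
      [[ $rc == 0 ]]                     # surfaces as [[ 0 == 0 ]] above
  }
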
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.128 09:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzFlM2QwM2E5M2ZhODA1YzkwYTZkMGY2ZWUwOTY5NzI3ODE2ZGU5NDgyNmIxMGQw5g/R1Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZiN2I5ZTQxMDllN2UyOTg2ZDI0ZDk0OTMyMDdlNmbIITEQ: 00:19:08.063 09:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.063 09:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.063 09:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.063 09:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.063 09:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.063 09:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.063 09:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:08.063 09:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:08.320 09:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:08.320 09:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.320 09:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:08.320 09:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:08.320 09:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:08.320 09:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.320 09:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:08.320 09:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.320 09:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.320 09:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.320 09:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.321 09:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.578 00:19:08.578 09:52:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.578 09:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.579 09:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.145 09:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.145 09:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.145 09:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.145 09:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.145 09:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.145 09:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.145 { 00:19:09.145 "cntlid": 63, 00:19:09.145 "qid": 0, 00:19:09.145 "state": "enabled", 00:19:09.145 "thread": "nvmf_tgt_poll_group_000", 00:19:09.145 "listen_address": { 00:19:09.145 "trtype": "TCP", 00:19:09.145 "adrfam": "IPv4", 00:19:09.145 "traddr": "10.0.0.2", 00:19:09.145 "trsvcid": "4420" 00:19:09.145 }, 00:19:09.145 "peer_address": { 00:19:09.145 "trtype": "TCP", 00:19:09.145 "adrfam": "IPv4", 00:19:09.145 "traddr": "10.0.0.1", 00:19:09.145 "trsvcid": "50702" 00:19:09.145 }, 00:19:09.145 "auth": { 00:19:09.145 "state": "completed", 00:19:09.145 "digest": "sha384", 00:19:09.145 "dhgroup": "ffdhe2048" 00:19:09.145 } 00:19:09.145 } 00:19:09.145 ]' 00:19:09.145 09:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.145 09:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:09.145 09:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.145 09:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:09.145 09:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.145 09:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.145 09:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.145 09:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.401 09:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODRhZGZiMGQyNTkxNWM5N2E3NDI2YjQ4OGQwZTVhNjljMWE4MTNiZGQ4NWZkYTlmODZiOGM1MmU3MGJlNzJmZMCAjiI=: 00:19:10.336 09:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.336 09:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.336 09:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.336 09:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
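
The "@92 for dhgroup" / "@93 for keyid" frames that open the next round show the loop driving all of these iterations: one digest per run, every DH group against every key index. Reconstructed from the frame markers, with variable names as they appear in the trace:

  for dhgroup in "${dhgroups[@]}"; do    # null ffdhe2048 ffdhe3072 ffdhe4096 ...
      for keyid in "${!keys[@]}"; do     # 0 1 2 3
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
  done
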
00:19:10.336 09:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.336 09:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.336 09:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.336 09:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:10.336 09:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:10.595 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:10.595 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.595 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:10.595 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:10.595 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:10.595 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.595 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.595 09:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.595 09:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.595 09:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.595 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.595 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.161 00:19:11.161 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.161 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.161 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.161 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.161 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.161 09:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.161 09:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.161 09:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.161 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.161 { 
00:19:11.161 "cntlid": 65, 00:19:11.161 "qid": 0, 00:19:11.161 "state": "enabled", 00:19:11.161 "thread": "nvmf_tgt_poll_group_000", 00:19:11.161 "listen_address": { 00:19:11.161 "trtype": "TCP", 00:19:11.161 "adrfam": "IPv4", 00:19:11.161 "traddr": "10.0.0.2", 00:19:11.161 "trsvcid": "4420" 00:19:11.161 }, 00:19:11.161 "peer_address": { 00:19:11.161 "trtype": "TCP", 00:19:11.161 "adrfam": "IPv4", 00:19:11.161 "traddr": "10.0.0.1", 00:19:11.161 "trsvcid": "50726" 00:19:11.161 }, 00:19:11.161 "auth": { 00:19:11.161 "state": "completed", 00:19:11.161 "digest": "sha384", 00:19:11.161 "dhgroup": "ffdhe3072" 00:19:11.161 } 00:19:11.161 } 00:19:11.161 ]' 00:19:11.161 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.418 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:11.418 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.418 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:11.418 09:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.418 09:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.418 09:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.418 09:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.675 09:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==: --dhchap-ctrl-secret DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=: 00:19:12.610 09:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.610 09:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.610 09:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.610 09:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.610 09:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.610 09:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.610 09:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:12.610 09:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:12.867 09:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:12.867 09:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.867 09:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:19:12.867 09:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:12.867 09:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:12.867 09:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.867 09:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.867 09:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.867 09:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.867 09:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.867 09:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.867 09:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.125 00:19:13.125 09:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.125 09:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.125 09:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.384 09:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.384 09:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.384 09:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.384 09:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.384 09:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.384 09:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.384 { 00:19:13.384 "cntlid": 67, 00:19:13.384 "qid": 0, 00:19:13.384 "state": "enabled", 00:19:13.384 "thread": "nvmf_tgt_poll_group_000", 00:19:13.384 "listen_address": { 00:19:13.384 "trtype": "TCP", 00:19:13.384 "adrfam": "IPv4", 00:19:13.384 "traddr": "10.0.0.2", 00:19:13.384 "trsvcid": "4420" 00:19:13.384 }, 00:19:13.384 "peer_address": { 00:19:13.384 "trtype": "TCP", 00:19:13.384 "adrfam": "IPv4", 00:19:13.384 "traddr": "10.0.0.1", 00:19:13.384 "trsvcid": "50752" 00:19:13.384 }, 00:19:13.384 "auth": { 00:19:13.384 "state": "completed", 00:19:13.384 "digest": "sha384", 00:19:13.384 "dhgroup": "ffdhe3072" 00:19:13.384 } 00:19:13.384 } 00:19:13.384 ]' 00:19:13.384 09:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.384 09:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.384 09:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.642 09:52:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:13.642 09:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.642 09:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.642 09:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.642 09:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.900 09:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjY5MTQ0NzBiODRiMzMwZDk0NDg1ODBiYTQwZGU3YTbmrPWt: --dhchap-ctrl-secret DHHC-1:02:ZDZhNWQ2M2M4M2I4MjkwZjZmZGVlNmQ1NmFlYjE3ZmE2MDMzYjdkOWJhZThiMmRlZUxbuA==: 00:19:14.860 09:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.860 09:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.860 09:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.860 09:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.860 09:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.860 09:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.860 09:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:14.860 09:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:15.117 09:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:15.117 09:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.117 09:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:15.117 09:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:15.117 09:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:15.117 09:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.117 09:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.117 09:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.117 09:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.117 09:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.117 09:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.117 09:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.374 00:19:15.374 09:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.374 09:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.374 09:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.630 09:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.630 09:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.630 09:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.630 09:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.630 09:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.630 09:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.630 { 00:19:15.630 "cntlid": 69, 00:19:15.630 "qid": 0, 00:19:15.630 "state": "enabled", 00:19:15.630 "thread": "nvmf_tgt_poll_group_000", 00:19:15.630 "listen_address": { 00:19:15.630 "trtype": "TCP", 00:19:15.630 "adrfam": "IPv4", 00:19:15.630 "traddr": "10.0.0.2", 00:19:15.630 "trsvcid": "4420" 00:19:15.630 }, 00:19:15.630 "peer_address": { 00:19:15.630 "trtype": "TCP", 00:19:15.630 "adrfam": "IPv4", 00:19:15.630 "traddr": "10.0.0.1", 00:19:15.630 "trsvcid": "50576" 00:19:15.630 }, 00:19:15.630 "auth": { 00:19:15.630 "state": "completed", 00:19:15.630 "digest": "sha384", 00:19:15.630 "dhgroup": "ffdhe3072" 00:19:15.630 } 00:19:15.630 } 00:19:15.630 ]' 00:19:15.630 09:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.630 09:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.630 09:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.887 09:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:15.887 09:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.887 09:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.887 09:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.887 09:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.144 09:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzFlM2QwM2E5M2ZhODA1YzkwYTZkMGY2ZWUwOTY5NzI3ODE2ZGU5NDgyNmIxMGQw5g/R1Q==: --dhchap-ctrl-secret 
DHHC-1:01:M2ZiN2I5ZTQxMDllN2UyOTg2ZDI0ZDk0OTMyMDdlNmbIITEQ: 00:19:17.073 09:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.073 09:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.073 09:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.073 09:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.073 09:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.073 09:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.073 09:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:17.073 09:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:17.330 09:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:17.330 09:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.330 09:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:17.330 09:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:17.330 09:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:17.330 09:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.330 09:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:17.330 09:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.330 09:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.330 09:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.330 09:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.330 09:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.587 00:19:17.587 09:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.588 09:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.588 09:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.845 09:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.845 09:52:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.845 09:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.845 09:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.103 09:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.103 09:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.103 { 00:19:18.103 "cntlid": 71, 00:19:18.103 "qid": 0, 00:19:18.103 "state": "enabled", 00:19:18.103 "thread": "nvmf_tgt_poll_group_000", 00:19:18.103 "listen_address": { 00:19:18.103 "trtype": "TCP", 00:19:18.103 "adrfam": "IPv4", 00:19:18.103 "traddr": "10.0.0.2", 00:19:18.103 "trsvcid": "4420" 00:19:18.103 }, 00:19:18.103 "peer_address": { 00:19:18.103 "trtype": "TCP", 00:19:18.103 "adrfam": "IPv4", 00:19:18.103 "traddr": "10.0.0.1", 00:19:18.103 "trsvcid": "50610" 00:19:18.103 }, 00:19:18.103 "auth": { 00:19:18.103 "state": "completed", 00:19:18.103 "digest": "sha384", 00:19:18.103 "dhgroup": "ffdhe3072" 00:19:18.103 } 00:19:18.103 } 00:19:18.103 ]' 00:19:18.103 09:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.103 09:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.103 09:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.103 09:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:18.103 09:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.103 09:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.103 09:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.103 09:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.361 09:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODRhZGZiMGQyNTkxNWM5N2E3NDI2YjQ4OGQwZTVhNjljMWE4MTNiZGQ4NWZkYTlmODZiOGM1MmU3MGJlNzJmZMCAjiI=: 00:19:19.295 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.295 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.295 09:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.295 09:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.295 09:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.295 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.295 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.295 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:19.295 09:52:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:19.552 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:19.552 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.552 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:19.552 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:19.552 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:19.552 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.553 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.553 09:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.553 09:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.553 09:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.553 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.553 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.118 00:19:20.118 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.118 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.118 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.376 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.376 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.376 09:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.376 09:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.376 09:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.376 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.376 { 00:19:20.376 "cntlid": 73, 00:19:20.376 "qid": 0, 00:19:20.376 "state": "enabled", 00:19:20.376 "thread": "nvmf_tgt_poll_group_000", 00:19:20.376 "listen_address": { 00:19:20.376 "trtype": "TCP", 00:19:20.376 "adrfam": "IPv4", 00:19:20.376 "traddr": "10.0.0.2", 00:19:20.376 "trsvcid": "4420" 00:19:20.376 }, 00:19:20.376 "peer_address": { 00:19:20.376 "trtype": "TCP", 00:19:20.376 "adrfam": "IPv4", 00:19:20.376 "traddr": "10.0.0.1", 00:19:20.376 "trsvcid": "50646" 00:19:20.376 }, 00:19:20.376 "auth": { 00:19:20.376 
"state": "completed", 00:19:20.376 "digest": "sha384", 00:19:20.376 "dhgroup": "ffdhe4096" 00:19:20.376 } 00:19:20.376 } 00:19:20.376 ]' 00:19:20.376 09:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.376 09:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.376 09:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.376 09:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:20.376 09:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.376 09:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.376 09:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.376 09:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.634 09:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==: --dhchap-ctrl-secret DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=: 00:19:21.568 09:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.568 09:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.568 09:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.568 09:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.825 09:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.825 09:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.825 09:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:21.825 09:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:21.825 09:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:21.825 09:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.825 09:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:21.825 09:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:21.825 09:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:21.825 09:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.826 09:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.826 09:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.826 09:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.826 09:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.826 09:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.826 09:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.391 00:19:22.391 09:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.391 09:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.391 09:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.649 09:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.649 09:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.649 09:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.649 09:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.649 09:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.649 09:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.649 { 00:19:22.649 "cntlid": 75, 00:19:22.649 "qid": 0, 00:19:22.649 "state": "enabled", 00:19:22.649 "thread": "nvmf_tgt_poll_group_000", 00:19:22.649 "listen_address": { 00:19:22.649 "trtype": "TCP", 00:19:22.649 "adrfam": "IPv4", 00:19:22.649 "traddr": "10.0.0.2", 00:19:22.649 "trsvcid": "4420" 00:19:22.649 }, 00:19:22.649 "peer_address": { 00:19:22.649 "trtype": "TCP", 00:19:22.649 "adrfam": "IPv4", 00:19:22.649 "traddr": "10.0.0.1", 00:19:22.649 "trsvcid": "50674" 00:19:22.649 }, 00:19:22.649 "auth": { 00:19:22.649 "state": "completed", 00:19:22.649 "digest": "sha384", 00:19:22.649 "dhgroup": "ffdhe4096" 00:19:22.649 } 00:19:22.649 } 00:19:22.649 ]' 00:19:22.649 09:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.649 09:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:22.649 09:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.649 09:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:22.649 09:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.649 09:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.649 09:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.649 09:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.907 09:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjY5MTQ0NzBiODRiMzMwZDk0NDg1ODBiYTQwZGU3YTbmrPWt: --dhchap-ctrl-secret DHHC-1:02:ZDZhNWQ2M2M4M2I4MjkwZjZmZGVlNmQ1NmFlYjE3ZmE2MDMzYjdkOWJhZThiMmRlZUxbuA==: 00:19:23.838 09:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.838 09:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.838 09:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.838 09:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.838 09:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.838 09:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.838 09:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:23.838 09:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:24.096 09:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:24.096 09:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.096 09:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:24.096 09:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:24.096 09:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:24.096 09:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.096 09:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.096 09:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.096 09:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.096 09:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.096 09:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.096 09:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:24.661 00:19:24.661 09:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.661 09:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.661 09:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.919 09:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.919 09:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.919 09:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.919 09:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.919 09:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.919 09:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.919 { 00:19:24.919 "cntlid": 77, 00:19:24.919 "qid": 0, 00:19:24.919 "state": "enabled", 00:19:24.919 "thread": "nvmf_tgt_poll_group_000", 00:19:24.919 "listen_address": { 00:19:24.919 "trtype": "TCP", 00:19:24.919 "adrfam": "IPv4", 00:19:24.919 "traddr": "10.0.0.2", 00:19:24.919 "trsvcid": "4420" 00:19:24.919 }, 00:19:24.919 "peer_address": { 00:19:24.919 "trtype": "TCP", 00:19:24.919 "adrfam": "IPv4", 00:19:24.919 "traddr": "10.0.0.1", 00:19:24.919 "trsvcid": "52432" 00:19:24.919 }, 00:19:24.919 "auth": { 00:19:24.919 "state": "completed", 00:19:24.919 "digest": "sha384", 00:19:24.919 "dhgroup": "ffdhe4096" 00:19:24.919 } 00:19:24.919 } 00:19:24.919 ]' 00:19:24.919 09:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.919 09:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:24.919 09:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.919 09:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:24.919 09:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.919 09:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.919 09:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.919 09:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.177 09:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzFlM2QwM2E5M2ZhODA1YzkwYTZkMGY2ZWUwOTY5NzI3ODE2ZGU5NDgyNmIxMGQw5g/R1Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZiN2I5ZTQxMDllN2UyOTg2ZDI0ZDk0OTMyMDdlNmbIITEQ: 00:19:26.109 09:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.109 09:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.109 09:52:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.109 09:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.109 09:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.109 09:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.109 09:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.109 09:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.408 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:26.408 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.408 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:26.408 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:26.408 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:26.408 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.408 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:26.408 09:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.408 09:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.408 09:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.408 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.408 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.996 00:19:26.996 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.996 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.996 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.254 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.254 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.254 09:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.254 09:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.254 09:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.254 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.254 { 00:19:27.254 "cntlid": 79, 00:19:27.254 "qid": 
0, 00:19:27.254 "state": "enabled", 00:19:27.254 "thread": "nvmf_tgt_poll_group_000", 00:19:27.254 "listen_address": { 00:19:27.254 "trtype": "TCP", 00:19:27.254 "adrfam": "IPv4", 00:19:27.254 "traddr": "10.0.0.2", 00:19:27.254 "trsvcid": "4420" 00:19:27.254 }, 00:19:27.254 "peer_address": { 00:19:27.254 "trtype": "TCP", 00:19:27.254 "adrfam": "IPv4", 00:19:27.254 "traddr": "10.0.0.1", 00:19:27.254 "trsvcid": "52472" 00:19:27.254 }, 00:19:27.254 "auth": { 00:19:27.254 "state": "completed", 00:19:27.254 "digest": "sha384", 00:19:27.254 "dhgroup": "ffdhe4096" 00:19:27.254 } 00:19:27.254 } 00:19:27.254 ]' 00:19:27.254 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.254 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:27.254 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.254 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.254 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.254 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.254 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.254 09:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.514 09:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODRhZGZiMGQyNTkxNWM5N2E3NDI2YjQ4OGQwZTVhNjljMWE4MTNiZGQ4NWZkYTlmODZiOGM1MmU3MGJlNzJmZMCAjiI=: 00:19:28.451 09:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.451 09:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.451 09:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.451 09:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.451 09:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.451 09:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.451 09:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.451 09:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:28.451 09:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:28.710 09:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:28.710 09:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.710 09:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:28.710 09:52:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:28.710 09:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:28.710 09:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.710 09:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.710 09:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.710 09:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.710 09:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.710 09:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.710 09:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.646 00:19:29.646 09:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.646 09:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.646 09:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.646 09:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.646 09:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.646 09:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.646 09:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.646 09:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.646 09:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.646 { 00:19:29.646 "cntlid": 81, 00:19:29.646 "qid": 0, 00:19:29.646 "state": "enabled", 00:19:29.646 "thread": "nvmf_tgt_poll_group_000", 00:19:29.646 "listen_address": { 00:19:29.646 "trtype": "TCP", 00:19:29.646 "adrfam": "IPv4", 00:19:29.646 "traddr": "10.0.0.2", 00:19:29.646 "trsvcid": "4420" 00:19:29.646 }, 00:19:29.646 "peer_address": { 00:19:29.646 "trtype": "TCP", 00:19:29.646 "adrfam": "IPv4", 00:19:29.646 "traddr": "10.0.0.1", 00:19:29.646 "trsvcid": "52506" 00:19:29.646 }, 00:19:29.646 "auth": { 00:19:29.646 "state": "completed", 00:19:29.646 "digest": "sha384", 00:19:29.646 "dhgroup": "ffdhe6144" 00:19:29.646 } 00:19:29.646 } 00:19:29.646 ]' 00:19:29.646 09:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.646 09:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:29.646 09:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.646 09:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:29.646 09:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.905 09:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.905 09:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.905 09:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.163 09:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==: --dhchap-ctrl-secret DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=: 00:19:31.101 09:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.101 09:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.101 09:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.101 09:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.101 09:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.101 09:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.101 09:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:31.101 09:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:31.101 09:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:31.101 09:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.101 09:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:31.101 09:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:31.102 09:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:31.102 09:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.102 09:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.102 09:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.102 09:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.361 09:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.361 09:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.361 09:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.929 00:19:31.929 09:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.929 09:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.929 09:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.929 09:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.929 09:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.929 09:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.929 09:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.929 09:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.929 09:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.929 { 00:19:31.929 "cntlid": 83, 00:19:31.929 "qid": 0, 00:19:31.929 "state": "enabled", 00:19:31.929 "thread": "nvmf_tgt_poll_group_000", 00:19:31.929 "listen_address": { 00:19:31.929 "trtype": "TCP", 00:19:31.929 "adrfam": "IPv4", 00:19:31.929 "traddr": "10.0.0.2", 00:19:31.929 "trsvcid": "4420" 00:19:31.929 }, 00:19:31.929 "peer_address": { 00:19:31.929 "trtype": "TCP", 00:19:31.929 "adrfam": "IPv4", 00:19:31.929 "traddr": "10.0.0.1", 00:19:31.929 "trsvcid": "52534" 00:19:31.929 }, 00:19:31.929 "auth": { 00:19:31.929 "state": "completed", 00:19:31.929 "digest": "sha384", 00:19:31.929 "dhgroup": "ffdhe6144" 00:19:31.929 } 00:19:31.929 } 00:19:31.929 ]' 00:19:31.929 09:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.194 09:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:32.194 09:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.194 09:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:32.194 09:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.194 09:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.194 09:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.194 09:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.452 09:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjY5MTQ0NzBiODRiMzMwZDk0NDg1ODBiYTQwZGU3YTbmrPWt: --dhchap-ctrl-secret 
DHHC-1:02:ZDZhNWQ2M2M4M2I4MjkwZjZmZGVlNmQ1NmFlYjE3ZmE2MDMzYjdkOWJhZThiMmRlZUxbuA==: 00:19:33.388 09:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.388 09:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.388 09:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.388 09:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.388 09:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.388 09:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.388 09:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:33.388 09:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:33.648 09:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:33.648 09:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.648 09:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:33.648 09:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:33.648 09:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:33.648 09:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.648 09:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.648 09:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.648 09:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.648 09:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.648 09:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.648 09:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.218 00:19:34.218 09:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.218 09:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.218 09:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.476 09:52:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.476 09:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.476 09:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.476 09:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.476 09:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.476 09:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.476 { 00:19:34.476 "cntlid": 85, 00:19:34.476 "qid": 0, 00:19:34.476 "state": "enabled", 00:19:34.476 "thread": "nvmf_tgt_poll_group_000", 00:19:34.476 "listen_address": { 00:19:34.476 "trtype": "TCP", 00:19:34.476 "adrfam": "IPv4", 00:19:34.476 "traddr": "10.0.0.2", 00:19:34.476 "trsvcid": "4420" 00:19:34.476 }, 00:19:34.476 "peer_address": { 00:19:34.476 "trtype": "TCP", 00:19:34.476 "adrfam": "IPv4", 00:19:34.476 "traddr": "10.0.0.1", 00:19:34.476 "trsvcid": "52564" 00:19:34.476 }, 00:19:34.476 "auth": { 00:19:34.476 "state": "completed", 00:19:34.476 "digest": "sha384", 00:19:34.476 "dhgroup": "ffdhe6144" 00:19:34.476 } 00:19:34.476 } 00:19:34.476 ]' 00:19:34.476 09:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.476 09:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.476 09:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.476 09:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:34.476 09:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.735 09:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.735 09:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.735 09:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.993 09:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzFlM2QwM2E5M2ZhODA1YzkwYTZkMGY2ZWUwOTY5NzI3ODE2ZGU5NDgyNmIxMGQw5g/R1Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZiN2I5ZTQxMDllN2UyOTg2ZDI0ZDk0OTMyMDdlNmbIITEQ: 00:19:35.928 09:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.928 09:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.928 09:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.928 09:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.928 09:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.928 09:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.928 09:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
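For reference, each sha384/ffdhe6144 pass traced above reduces to the same per-key cycle. A minimal sketch of that cycle, assuming the DH-HMAC-CHAP keys (key0..key3 and their ckey counterparts) were registered earlier in the run, e.g. via keyring_file_add_key, on both the target and the host service at /var/tmp/host.sock; the rpc.py path, socket, and NQNs are taken from the trace, everything else is illustrative:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    host_sock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # host side: restrict the initiator to the digest/dhgroup under test
    "$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # target side: allow the host on the subsystem and bind the key pair to it
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # host side: attaching the controller performs the in-band DH-HMAC-CHAP exchange
    "$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # target side: the qpair should report auth state "completed" with sha384/ffdhe6144
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -e '.[0].auth.state == "completed"'
    # tear down before the next key/dhgroup combination
    "$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0
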
00:19:35.928 09:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:36.186 09:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:36.186 09:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.186 09:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:36.186 09:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:36.186 09:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:36.186 09:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.186 09:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:36.186 09:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.186 09:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.186 09:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.186 09:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.186 09:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.752 00:19:36.752 09:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.752 09:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.752 09:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.011 09:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.011 09:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.011 09:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.011 09:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.011 09:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.011 09:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.011 { 00:19:37.011 "cntlid": 87, 00:19:37.011 "qid": 0, 00:19:37.011 "state": "enabled", 00:19:37.011 "thread": "nvmf_tgt_poll_group_000", 00:19:37.011 "listen_address": { 00:19:37.011 "trtype": "TCP", 00:19:37.011 "adrfam": "IPv4", 00:19:37.011 "traddr": "10.0.0.2", 00:19:37.011 "trsvcid": "4420" 00:19:37.011 }, 00:19:37.011 "peer_address": { 00:19:37.011 "trtype": "TCP", 00:19:37.011 "adrfam": "IPv4", 00:19:37.011 "traddr": "10.0.0.1", 00:19:37.011 "trsvcid": "50440" 00:19:37.011 }, 00:19:37.011 "auth": { 00:19:37.011 "state": "completed", 
00:19:37.011 "digest": "sha384", 00:19:37.011 "dhgroup": "ffdhe6144" 00:19:37.011 } 00:19:37.011 } 00:19:37.011 ]' 00:19:37.011 09:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.011 09:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:37.011 09:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.011 09:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:37.011 09:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.011 09:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.011 09:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.011 09:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.269 09:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODRhZGZiMGQyNTkxNWM5N2E3NDI2YjQ4OGQwZTVhNjljMWE4MTNiZGQ4NWZkYTlmODZiOGM1MmU3MGJlNzJmZMCAjiI=: 00:19:38.204 09:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.204 09:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.204 09:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.204 09:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.204 09:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.204 09:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.204 09:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.204 09:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:38.204 09:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:38.462 09:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:38.462 09:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.462 09:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:38.462 09:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:38.462 09:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:38.462 09:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.462 09:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:38.462 09:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.462 09:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.462 09:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.462 09:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.462 09:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.402 00:19:39.402 09:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.402 09:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.402 09:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.660 09:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.660 09:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.660 09:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.660 09:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.660 09:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.660 09:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.660 { 00:19:39.660 "cntlid": 89, 00:19:39.660 "qid": 0, 00:19:39.660 "state": "enabled", 00:19:39.660 "thread": "nvmf_tgt_poll_group_000", 00:19:39.660 "listen_address": { 00:19:39.660 "trtype": "TCP", 00:19:39.660 "adrfam": "IPv4", 00:19:39.660 "traddr": "10.0.0.2", 00:19:39.660 "trsvcid": "4420" 00:19:39.660 }, 00:19:39.660 "peer_address": { 00:19:39.660 "trtype": "TCP", 00:19:39.660 "adrfam": "IPv4", 00:19:39.660 "traddr": "10.0.0.1", 00:19:39.660 "trsvcid": "50462" 00:19:39.660 }, 00:19:39.660 "auth": { 00:19:39.660 "state": "completed", 00:19:39.660 "digest": "sha384", 00:19:39.660 "dhgroup": "ffdhe8192" 00:19:39.660 } 00:19:39.660 } 00:19:39.660 ]' 00:19:39.660 09:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.660 09:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.660 09:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.660 09:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:39.660 09:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.919 09:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.919 09:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.919 09:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.177 09:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==: --dhchap-ctrl-secret DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=: 00:19:41.110 09:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.110 09:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.110 09:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.110 09:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.110 09:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.110 09:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.110 09:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:41.110 09:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:41.368 09:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:41.368 09:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.368 09:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:41.368 09:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:41.368 09:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:41.368 09:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.368 09:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.368 09:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.368 09:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.368 09:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.368 09:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.368 09:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
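The nvme connect / nvme disconnect pairs interleaved with these RPC cycles exercise the same subsystem from the kernel initiator rather than the SPDK host. A sketch of that leg, assuming an nvme-cli build with DH-HMAC-CHAP support; the DHHC-1 strings below are placeholders standing in for the generated secrets visible in the trace, not valid keys:

    # host NQN and hostid mirror the trace; -i 1 requests a single I/O queue
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret 'DHHC-1:00:<host secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>:'
    # --dhchap-ctrl-secret makes the authentication bidirectional; drop it for one-way auth
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
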
00:19:42.301 00:19:42.301 09:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.301 09:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.301 09:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.301 09:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.301 09:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.301 09:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.301 09:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.301 09:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.301 09:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.301 { 00:19:42.301 "cntlid": 91, 00:19:42.301 "qid": 0, 00:19:42.301 "state": "enabled", 00:19:42.301 "thread": "nvmf_tgt_poll_group_000", 00:19:42.301 "listen_address": { 00:19:42.301 "trtype": "TCP", 00:19:42.301 "adrfam": "IPv4", 00:19:42.301 "traddr": "10.0.0.2", 00:19:42.301 "trsvcid": "4420" 00:19:42.301 }, 00:19:42.301 "peer_address": { 00:19:42.301 "trtype": "TCP", 00:19:42.301 "adrfam": "IPv4", 00:19:42.301 "traddr": "10.0.0.1", 00:19:42.301 "trsvcid": "50486" 00:19:42.301 }, 00:19:42.301 "auth": { 00:19:42.301 "state": "completed", 00:19:42.301 "digest": "sha384", 00:19:42.301 "dhgroup": "ffdhe8192" 00:19:42.301 } 00:19:42.301 } 00:19:42.301 ]' 00:19:42.301 09:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.301 09:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:42.301 09:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.301 09:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:42.301 09:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.557 09:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.557 09:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.557 09:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.814 09:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjY5MTQ0NzBiODRiMzMwZDk0NDg1ODBiYTQwZGU3YTbmrPWt: --dhchap-ctrl-secret DHHC-1:02:ZDZhNWQ2M2M4M2I4MjkwZjZmZGVlNmQ1NmFlYjE3ZmE2MDMzYjdkOWJhZThiMmRlZUxbuA==: 00:19:43.750 09:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.750 09:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.751 09:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:43.751 09:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.751 09:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.751 09:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.751 09:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:43.751 09:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:44.008 09:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:44.008 09:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.008 09:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:44.008 09:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:44.008 09:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:44.008 09:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.008 09:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.008 09:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.008 09:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.008 09:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.008 09:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.008 09:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.939 00:19:44.939 09:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.939 09:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.939 09:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.197 09:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.197 09:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.197 09:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.197 09:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.197 09:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.197 09:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.197 { 
00:19:45.197 "cntlid": 93, 00:19:45.197 "qid": 0, 00:19:45.197 "state": "enabled", 00:19:45.197 "thread": "nvmf_tgt_poll_group_000", 00:19:45.197 "listen_address": { 00:19:45.197 "trtype": "TCP", 00:19:45.197 "adrfam": "IPv4", 00:19:45.197 "traddr": "10.0.0.2", 00:19:45.197 "trsvcid": "4420" 00:19:45.197 }, 00:19:45.197 "peer_address": { 00:19:45.197 "trtype": "TCP", 00:19:45.197 "adrfam": "IPv4", 00:19:45.197 "traddr": "10.0.0.1", 00:19:45.197 "trsvcid": "50530" 00:19:45.197 }, 00:19:45.197 "auth": { 00:19:45.197 "state": "completed", 00:19:45.197 "digest": "sha384", 00:19:45.197 "dhgroup": "ffdhe8192" 00:19:45.197 } 00:19:45.197 } 00:19:45.197 ]' 00:19:45.197 09:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.197 09:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.197 09:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.197 09:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.197 09:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.455 09:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.455 09:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.455 09:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.713 09:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzFlM2QwM2E5M2ZhODA1YzkwYTZkMGY2ZWUwOTY5NzI3ODE2ZGU5NDgyNmIxMGQw5g/R1Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZiN2I5ZTQxMDllN2UyOTg2ZDI0ZDk0OTMyMDdlNmbIITEQ: 00:19:46.645 09:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.645 09:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.645 09:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.645 09:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.645 09:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.645 09:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.645 09:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:46.645 09:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:46.900 09:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:46.900 09:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.900 09:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:46.900 09:53:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:46.900 09:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:46.900 09:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.900 09:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:46.900 09:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.900 09:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.900 09:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.900 09:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.900 09:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.879 00:19:47.879 09:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.879 09:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.879 09:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.879 09:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.879 09:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.879 09:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.879 09:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.135 09:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.135 09:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.135 { 00:19:48.135 "cntlid": 95, 00:19:48.135 "qid": 0, 00:19:48.135 "state": "enabled", 00:19:48.135 "thread": "nvmf_tgt_poll_group_000", 00:19:48.135 "listen_address": { 00:19:48.135 "trtype": "TCP", 00:19:48.135 "adrfam": "IPv4", 00:19:48.135 "traddr": "10.0.0.2", 00:19:48.135 "trsvcid": "4420" 00:19:48.135 }, 00:19:48.135 "peer_address": { 00:19:48.135 "trtype": "TCP", 00:19:48.135 "adrfam": "IPv4", 00:19:48.135 "traddr": "10.0.0.1", 00:19:48.135 "trsvcid": "58962" 00:19:48.135 }, 00:19:48.135 "auth": { 00:19:48.135 "state": "completed", 00:19:48.135 "digest": "sha384", 00:19:48.135 "dhgroup": "ffdhe8192" 00:19:48.135 } 00:19:48.135 } 00:19:48.135 ]' 00:19:48.135 09:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.135 09:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.135 09:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.135 09:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:48.135 09:53:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.135 09:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.135 09:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.135 09:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.393 09:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODRhZGZiMGQyNTkxNWM5N2E3NDI2YjQ4OGQwZTVhNjljMWE4MTNiZGQ4NWZkYTlmODZiOGM1MmU3MGJlNzJmZMCAjiI=: 00:19:49.325 09:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.325 09:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.325 09:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.325 09:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.325 09:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.325 09:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:49.325 09:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.325 09:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.325 09:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:49.325 09:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:49.583 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:49.583 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.583 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:49.583 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:49.583 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:49.583 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.583 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.583 09:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.583 09:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.583 09:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.583 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.583 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.841 00:19:49.841 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.841 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.841 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.098 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.098 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.098 09:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.098 09:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.098 09:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.098 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.098 { 00:19:50.098 "cntlid": 97, 00:19:50.098 "qid": 0, 00:19:50.098 "state": "enabled", 00:19:50.098 "thread": "nvmf_tgt_poll_group_000", 00:19:50.098 "listen_address": { 00:19:50.098 "trtype": "TCP", 00:19:50.098 "adrfam": "IPv4", 00:19:50.098 "traddr": "10.0.0.2", 00:19:50.098 "trsvcid": "4420" 00:19:50.098 }, 00:19:50.098 "peer_address": { 00:19:50.098 "trtype": "TCP", 00:19:50.098 "adrfam": "IPv4", 00:19:50.098 "traddr": "10.0.0.1", 00:19:50.098 "trsvcid": "58978" 00:19:50.098 }, 00:19:50.098 "auth": { 00:19:50.098 "state": "completed", 00:19:50.098 "digest": "sha512", 00:19:50.098 "dhgroup": "null" 00:19:50.098 } 00:19:50.098 } 00:19:50.098 ]' 00:19:50.098 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.355 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.355 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.355 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:50.355 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.355 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.356 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.356 09:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.612 09:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==: --dhchap-ctrl-secret 
DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=: 00:19:51.545 09:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.545 09:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.545 09:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.545 09:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.545 09:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.545 09:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.545 09:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:51.545 09:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:51.803 09:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:51.803 09:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.803 09:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:51.803 09:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:51.803 09:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:51.803 09:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.803 09:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.803 09:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.803 09:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.803 09:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.803 09:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.803 09:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.062 00:19:52.062 09:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.062 09:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.062 09:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.320 09:53:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.320 09:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.320 09:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.320 09:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.320 09:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.320 09:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.320 { 00:19:52.320 "cntlid": 99, 00:19:52.320 "qid": 0, 00:19:52.320 "state": "enabled", 00:19:52.320 "thread": "nvmf_tgt_poll_group_000", 00:19:52.320 "listen_address": { 00:19:52.320 "trtype": "TCP", 00:19:52.320 "adrfam": "IPv4", 00:19:52.320 "traddr": "10.0.0.2", 00:19:52.320 "trsvcid": "4420" 00:19:52.320 }, 00:19:52.320 "peer_address": { 00:19:52.320 "trtype": "TCP", 00:19:52.320 "adrfam": "IPv4", 00:19:52.320 "traddr": "10.0.0.1", 00:19:52.320 "trsvcid": "59006" 00:19:52.320 }, 00:19:52.320 "auth": { 00:19:52.320 "state": "completed", 00:19:52.320 "digest": "sha512", 00:19:52.320 "dhgroup": "null" 00:19:52.320 } 00:19:52.320 } 00:19:52.320 ]' 00:19:52.320 09:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.320 09:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.320 09:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.578 09:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:52.578 09:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.578 09:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.578 09:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.578 09:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.836 09:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjY5MTQ0NzBiODRiMzMwZDk0NDg1ODBiYTQwZGU3YTbmrPWt: --dhchap-ctrl-secret DHHC-1:02:ZDZhNWQ2M2M4M2I4MjkwZjZmZGVlNmQ1NmFlYjE3ZmE2MDMzYjdkOWJhZThiMmRlZUxbuA==: 00:19:53.771 09:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.771 09:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.771 09:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.771 09:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.771 09:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.771 09:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.771 09:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:53.771 09:53:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:54.029 09:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:54.029 09:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.029 09:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:54.029 09:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:54.029 09:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:54.029 09:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.029 09:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.029 09:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.029 09:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.029 09:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.029 09:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.029 09:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.287 00:19:54.287 09:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.287 09:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.287 09:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.545 09:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.545 09:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.545 09:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.545 09:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.545 09:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.545 09:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.545 { 00:19:54.545 "cntlid": 101, 00:19:54.545 "qid": 0, 00:19:54.545 "state": "enabled", 00:19:54.545 "thread": "nvmf_tgt_poll_group_000", 00:19:54.545 "listen_address": { 00:19:54.545 "trtype": "TCP", 00:19:54.545 "adrfam": "IPv4", 00:19:54.545 "traddr": "10.0.0.2", 00:19:54.545 "trsvcid": "4420" 00:19:54.545 }, 00:19:54.545 "peer_address": { 00:19:54.545 "trtype": "TCP", 00:19:54.545 "adrfam": "IPv4", 00:19:54.545 "traddr": "10.0.0.1", 00:19:54.545 "trsvcid": "59046" 00:19:54.545 }, 00:19:54.545 "auth": 
{ 00:19:54.545 "state": "completed", 00:19:54.545 "digest": "sha512", 00:19:54.545 "dhgroup": "null" 00:19:54.545 } 00:19:54.545 } 00:19:54.545 ]' 00:19:54.545 09:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.545 09:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.545 09:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.802 09:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:54.802 09:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.802 09:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.802 09:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.802 09:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.060 09:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzFlM2QwM2E5M2ZhODA1YzkwYTZkMGY2ZWUwOTY5NzI3ODE2ZGU5NDgyNmIxMGQw5g/R1Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZiN2I5ZTQxMDllN2UyOTg2ZDI0ZDk0OTMyMDdlNmbIITEQ: 00:19:56.029 09:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.029 09:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.029 09:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.029 09:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.029 09:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.029 09:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.029 09:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:56.029 09:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:56.288 09:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:56.288 09:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.288 09:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:56.288 09:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:56.288 09:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:56.288 09:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.288 09:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:56.288 09:53:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.288 09:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.288 09:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.288 09:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.288 09:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.546 00:19:56.546 09:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.546 09:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.546 09:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.803 09:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.803 09:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.803 09:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.803 09:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.803 09:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.803 09:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.803 { 00:19:56.803 "cntlid": 103, 00:19:56.803 "qid": 0, 00:19:56.803 "state": "enabled", 00:19:56.803 "thread": "nvmf_tgt_poll_group_000", 00:19:56.803 "listen_address": { 00:19:56.803 "trtype": "TCP", 00:19:56.803 "adrfam": "IPv4", 00:19:56.803 "traddr": "10.0.0.2", 00:19:56.803 "trsvcid": "4420" 00:19:56.803 }, 00:19:56.803 "peer_address": { 00:19:56.803 "trtype": "TCP", 00:19:56.803 "adrfam": "IPv4", 00:19:56.803 "traddr": "10.0.0.1", 00:19:56.803 "trsvcid": "49688" 00:19:56.803 }, 00:19:56.803 "auth": { 00:19:56.803 "state": "completed", 00:19:56.803 "digest": "sha512", 00:19:56.803 "dhgroup": "null" 00:19:56.803 } 00:19:56.803 } 00:19:56.803 ]' 00:19:56.803 09:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.803 09:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.803 09:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.803 09:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:56.803 09:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.060 09:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.060 09:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.060 09:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.060 09:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODRhZGZiMGQyNTkxNWM5N2E3NDI2YjQ4OGQwZTVhNjljMWE4MTNiZGQ4NWZkYTlmODZiOGM1MmU3MGJlNzJmZMCAjiI=: 00:19:57.990 09:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.248 09:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.248 09:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.248 09:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.248 09:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.248 09:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.248 09:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.248 09:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:58.248 09:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:58.506 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:58.506 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.506 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:58.506 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:58.506 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:58.506 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.506 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.506 09:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.506 09:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.506 09:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.506 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.506 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.765 00:19:58.765 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.765 09:53:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.765 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.023 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.023 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.023 09:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.023 09:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.023 09:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.023 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.023 { 00:19:59.023 "cntlid": 105, 00:19:59.023 "qid": 0, 00:19:59.023 "state": "enabled", 00:19:59.023 "thread": "nvmf_tgt_poll_group_000", 00:19:59.023 "listen_address": { 00:19:59.023 "trtype": "TCP", 00:19:59.023 "adrfam": "IPv4", 00:19:59.023 "traddr": "10.0.0.2", 00:19:59.023 "trsvcid": "4420" 00:19:59.023 }, 00:19:59.023 "peer_address": { 00:19:59.023 "trtype": "TCP", 00:19:59.023 "adrfam": "IPv4", 00:19:59.023 "traddr": "10.0.0.1", 00:19:59.023 "trsvcid": "49724" 00:19:59.023 }, 00:19:59.023 "auth": { 00:19:59.023 "state": "completed", 00:19:59.023 "digest": "sha512", 00:19:59.023 "dhgroup": "ffdhe2048" 00:19:59.023 } 00:19:59.023 } 00:19:59.023 ]' 00:19:59.023 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.023 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:59.023 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.023 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:59.023 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.023 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.023 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.023 09:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.282 09:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==: --dhchap-ctrl-secret DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=: 00:20:00.214 09:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.473 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.473 09:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.473 09:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
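[Annotation, not captured output] Each pass of the loop above drives the same four-RPC cycle, varying only the digest, DH group, and key index. A minimal sketch of one pass, assuming SPDK's scripts/rpc.py talks to the target on its default socket and to the host driver via -s /var/tmp/host.sock, with the subsystem NQN and host UUID taken from this run; key1/ckey1 name keys registered earlier in the test, outside this excerpt:

    # Host side: restrict the driver to one digest/dhgroup combination
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    # Target side: allow the host and bind its DH-HMAC-CHAP key pair
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Host side: attach; this only succeeds if authentication completes
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Verify the negotiated parameters on the target's qpair, as the
    # jq checks in the log do ("completed", "sha512", "ffdhe2048")
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | .state, .digest, .dhgroup'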
00:20:00.473 09:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.473 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.473 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:00.473 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:00.732 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:00.732 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.732 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:00.732 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:00.732 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:00.732 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.732 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.732 09:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.732 09:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.732 09:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.732 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.732 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.990 00:20:00.990 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.990 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.990 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.248 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.248 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.248 09:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.248 09:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.248 09:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.248 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.248 { 00:20:01.248 "cntlid": 107, 00:20:01.248 "qid": 0, 00:20:01.248 "state": "enabled", 00:20:01.248 "thread": 
"nvmf_tgt_poll_group_000", 00:20:01.248 "listen_address": { 00:20:01.248 "trtype": "TCP", 00:20:01.248 "adrfam": "IPv4", 00:20:01.248 "traddr": "10.0.0.2", 00:20:01.248 "trsvcid": "4420" 00:20:01.248 }, 00:20:01.248 "peer_address": { 00:20:01.248 "trtype": "TCP", 00:20:01.248 "adrfam": "IPv4", 00:20:01.248 "traddr": "10.0.0.1", 00:20:01.248 "trsvcid": "49744" 00:20:01.248 }, 00:20:01.248 "auth": { 00:20:01.248 "state": "completed", 00:20:01.248 "digest": "sha512", 00:20:01.248 "dhgroup": "ffdhe2048" 00:20:01.248 } 00:20:01.248 } 00:20:01.248 ]' 00:20:01.248 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.248 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:01.248 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.248 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:01.248 09:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.248 09:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.248 09:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.248 09:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.507 09:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjY5MTQ0NzBiODRiMzMwZDk0NDg1ODBiYTQwZGU3YTbmrPWt: --dhchap-ctrl-secret DHHC-1:02:ZDZhNWQ2M2M4M2I4MjkwZjZmZGVlNmQ1NmFlYjE3ZmE2MDMzYjdkOWJhZThiMmRlZUxbuA==: 00:20:02.880 09:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.880 09:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.880 09:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.880 09:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.880 09:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.880 09:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.880 09:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:02.880 09:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:02.880 09:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:02.880 09:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.880 09:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:02.880 09:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:02.880 09:53:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:02.880 09:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.880 09:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.880 09:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.880 09:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.880 09:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.880 09:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.880 09:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.138 00:20:03.138 09:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.138 09:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.138 09:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.396 09:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.396 09:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.396 09:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.396 09:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.396 09:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.396 09:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.396 { 00:20:03.396 "cntlid": 109, 00:20:03.396 "qid": 0, 00:20:03.396 "state": "enabled", 00:20:03.396 "thread": "nvmf_tgt_poll_group_000", 00:20:03.396 "listen_address": { 00:20:03.396 "trtype": "TCP", 00:20:03.396 "adrfam": "IPv4", 00:20:03.396 "traddr": "10.0.0.2", 00:20:03.396 "trsvcid": "4420" 00:20:03.396 }, 00:20:03.396 "peer_address": { 00:20:03.396 "trtype": "TCP", 00:20:03.396 "adrfam": "IPv4", 00:20:03.396 "traddr": "10.0.0.1", 00:20:03.396 "trsvcid": "49762" 00:20:03.396 }, 00:20:03.396 "auth": { 00:20:03.396 "state": "completed", 00:20:03.396 "digest": "sha512", 00:20:03.396 "dhgroup": "ffdhe2048" 00:20:03.396 } 00:20:03.396 } 00:20:03.396 ]' 00:20:03.396 09:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.396 09:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:03.396 09:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.653 09:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.653 09:53:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.653 09:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.653 09:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.653 09:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.910 09:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzFlM2QwM2E5M2ZhODA1YzkwYTZkMGY2ZWUwOTY5NzI3ODE2ZGU5NDgyNmIxMGQw5g/R1Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZiN2I5ZTQxMDllN2UyOTg2ZDI0ZDk0OTMyMDdlNmbIITEQ: 00:20:04.842 09:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.842 09:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.842 09:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.842 09:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.842 09:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.842 09:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.842 09:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:04.842 09:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:05.099 09:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:05.099 09:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.099 09:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:05.099 09:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:05.099 09:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:05.099 09:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.099 09:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:05.099 09:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.099 09:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.099 09:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.099 09:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.099 09:53:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.356 00:20:05.356 09:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.356 09:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.356 09:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.614 09:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.614 09:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.614 09:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.614 09:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.614 09:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.614 09:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.614 { 00:20:05.614 "cntlid": 111, 00:20:05.614 "qid": 0, 00:20:05.614 "state": "enabled", 00:20:05.614 "thread": "nvmf_tgt_poll_group_000", 00:20:05.614 "listen_address": { 00:20:05.614 "trtype": "TCP", 00:20:05.614 "adrfam": "IPv4", 00:20:05.614 "traddr": "10.0.0.2", 00:20:05.614 "trsvcid": "4420" 00:20:05.614 }, 00:20:05.614 "peer_address": { 00:20:05.614 "trtype": "TCP", 00:20:05.614 "adrfam": "IPv4", 00:20:05.614 "traddr": "10.0.0.1", 00:20:05.614 "trsvcid": "52134" 00:20:05.614 }, 00:20:05.614 "auth": { 00:20:05.614 "state": "completed", 00:20:05.614 "digest": "sha512", 00:20:05.614 "dhgroup": "ffdhe2048" 00:20:05.614 } 00:20:05.614 } 00:20:05.614 ]' 00:20:05.614 09:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.614 09:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:05.614 09:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.614 09:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:05.614 09:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.614 09:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.614 09:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.614 09:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.871 09:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODRhZGZiMGQyNTkxNWM5N2E3NDI2YjQ4OGQwZTVhNjljMWE4MTNiZGQ4NWZkYTlmODZiOGM1MmU3MGJlNzJmZMCAjiI=: 00:20:06.803 09:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.803 09:53:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.803 09:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.803 09:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.060 09:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.060 09:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.060 09:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.060 09:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:07.060 09:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:07.060 09:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:07.060 09:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.060 09:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:07.060 09:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:07.060 09:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:07.060 09:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.060 09:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.060 09:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.060 09:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.317 09:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.317 09:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.317 09:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.575 00:20:07.575 09:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.575 09:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.575 09:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.832 09:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.832 09:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.832 09:53:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.832 09:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.832 09:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.832 09:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.832 { 00:20:07.832 "cntlid": 113, 00:20:07.832 "qid": 0, 00:20:07.832 "state": "enabled", 00:20:07.832 "thread": "nvmf_tgt_poll_group_000", 00:20:07.832 "listen_address": { 00:20:07.832 "trtype": "TCP", 00:20:07.832 "adrfam": "IPv4", 00:20:07.832 "traddr": "10.0.0.2", 00:20:07.832 "trsvcid": "4420" 00:20:07.832 }, 00:20:07.832 "peer_address": { 00:20:07.832 "trtype": "TCP", 00:20:07.832 "adrfam": "IPv4", 00:20:07.832 "traddr": "10.0.0.1", 00:20:07.832 "trsvcid": "52168" 00:20:07.832 }, 00:20:07.832 "auth": { 00:20:07.832 "state": "completed", 00:20:07.832 "digest": "sha512", 00:20:07.832 "dhgroup": "ffdhe3072" 00:20:07.832 } 00:20:07.832 } 00:20:07.832 ]' 00:20:07.832 09:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.832 09:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:07.832 09:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.832 09:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:07.832 09:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.832 09:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.832 09:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.832 09:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.090 09:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==: --dhchap-ctrl-secret DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=: 00:20:09.022 09:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.022 09:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.022 09:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.022 09:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.280 09:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.280 09:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.280 09:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:09.280 09:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:09.280 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:09.280 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.280 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:09.280 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:09.280 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:09.280 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.280 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.280 09:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.280 09:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.280 09:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.280 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.280 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.845 00:20:09.845 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.845 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.845 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.158 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.158 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.158 09:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.158 09:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.158 09:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.158 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.158 { 00:20:10.158 "cntlid": 115, 00:20:10.158 "qid": 0, 00:20:10.158 "state": "enabled", 00:20:10.158 "thread": "nvmf_tgt_poll_group_000", 00:20:10.158 "listen_address": { 00:20:10.158 "trtype": "TCP", 00:20:10.158 "adrfam": "IPv4", 00:20:10.158 "traddr": "10.0.0.2", 00:20:10.158 "trsvcid": "4420" 00:20:10.158 }, 00:20:10.158 "peer_address": { 00:20:10.158 "trtype": "TCP", 00:20:10.158 "adrfam": "IPv4", 00:20:10.158 "traddr": "10.0.0.1", 00:20:10.158 "trsvcid": "52206" 00:20:10.158 }, 00:20:10.158 "auth": { 00:20:10.158 "state": "completed", 00:20:10.158 "digest": "sha512", 00:20:10.158 "dhgroup": "ffdhe3072" 00:20:10.158 } 00:20:10.158 } 
00:20:10.158 ]' 00:20:10.158 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.158 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:10.158 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.158 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:10.158 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.158 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.158 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.158 09:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.417 09:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjY5MTQ0NzBiODRiMzMwZDk0NDg1ODBiYTQwZGU3YTbmrPWt: --dhchap-ctrl-secret DHHC-1:02:ZDZhNWQ2M2M4M2I4MjkwZjZmZGVlNmQ1NmFlYjE3ZmE2MDMzYjdkOWJhZThiMmRlZUxbuA==: 00:20:11.358 09:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.358 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.358 09:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.358 09:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.358 09:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.358 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.358 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.358 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.616 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:11.616 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.616 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:11.616 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:11.616 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:11.616 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.616 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.616 09:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.616 09:53:28 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.616 09:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.616 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.616 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.185 00:20:12.185 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.185 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.185 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.185 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.443 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.443 09:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.443 09:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.443 09:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.443 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.443 { 00:20:12.443 "cntlid": 117, 00:20:12.443 "qid": 0, 00:20:12.443 "state": "enabled", 00:20:12.443 "thread": "nvmf_tgt_poll_group_000", 00:20:12.443 "listen_address": { 00:20:12.443 "trtype": "TCP", 00:20:12.443 "adrfam": "IPv4", 00:20:12.443 "traddr": "10.0.0.2", 00:20:12.443 "trsvcid": "4420" 00:20:12.443 }, 00:20:12.443 "peer_address": { 00:20:12.443 "trtype": "TCP", 00:20:12.443 "adrfam": "IPv4", 00:20:12.443 "traddr": "10.0.0.1", 00:20:12.443 "trsvcid": "52236" 00:20:12.443 }, 00:20:12.443 "auth": { 00:20:12.443 "state": "completed", 00:20:12.443 "digest": "sha512", 00:20:12.443 "dhgroup": "ffdhe3072" 00:20:12.443 } 00:20:12.443 } 00:20:12.443 ]' 00:20:12.443 09:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.443 09:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:12.443 09:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.443 09:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:12.443 09:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.443 09:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.443 09:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.443 09:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.701 09:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzFlM2QwM2E5M2ZhODA1YzkwYTZkMGY2ZWUwOTY5NzI3ODE2ZGU5NDgyNmIxMGQw5g/R1Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZiN2I5ZTQxMDllN2UyOTg2ZDI0ZDk0OTMyMDdlNmbIITEQ: 00:20:13.637 09:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.637 09:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.637 09:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.637 09:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.637 09:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.637 09:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.637 09:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:13.637 09:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:13.896 09:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:13.896 09:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.896 09:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:13.896 09:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:13.896 09:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:13.896 09:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.896 09:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:13.896 09:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.896 09:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.896 09:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.896 09:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.896 09:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.153 00:20:14.153 09:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.153 09:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.153 09:53:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.411 09:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.411 09:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.411 09:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.411 09:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.411 09:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.411 09:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.411 { 00:20:14.411 "cntlid": 119, 00:20:14.411 "qid": 0, 00:20:14.411 "state": "enabled", 00:20:14.411 "thread": "nvmf_tgt_poll_group_000", 00:20:14.411 "listen_address": { 00:20:14.411 "trtype": "TCP", 00:20:14.411 "adrfam": "IPv4", 00:20:14.411 "traddr": "10.0.0.2", 00:20:14.411 "trsvcid": "4420" 00:20:14.411 }, 00:20:14.411 "peer_address": { 00:20:14.411 "trtype": "TCP", 00:20:14.411 "adrfam": "IPv4", 00:20:14.411 "traddr": "10.0.0.1", 00:20:14.411 "trsvcid": "52272" 00:20:14.411 }, 00:20:14.411 "auth": { 00:20:14.411 "state": "completed", 00:20:14.411 "digest": "sha512", 00:20:14.411 "dhgroup": "ffdhe3072" 00:20:14.411 } 00:20:14.411 } 00:20:14.411 ]' 00:20:14.411 09:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.411 09:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.411 09:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.411 09:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:14.411 09:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.411 09:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.411 09:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.411 09:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.669 09:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODRhZGZiMGQyNTkxNWM5N2E3NDI2YjQ4OGQwZTVhNjljMWE4MTNiZGQ4NWZkYTlmODZiOGM1MmU3MGJlNzJmZMCAjiI=: 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.048 09:53:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.048 09:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.617 00:20:16.617 09:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.617 09:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.617 09:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.874 09:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.875 09:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.875 09:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.875 09:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.875 09:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.875 09:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.875 { 00:20:16.875 "cntlid": 121, 00:20:16.875 "qid": 0, 00:20:16.875 "state": "enabled", 00:20:16.875 "thread": "nvmf_tgt_poll_group_000", 00:20:16.875 "listen_address": { 00:20:16.875 "trtype": "TCP", 00:20:16.875 "adrfam": "IPv4", 
00:20:16.875 "traddr": "10.0.0.2", 00:20:16.875 "trsvcid": "4420" 00:20:16.875 }, 00:20:16.875 "peer_address": { 00:20:16.875 "trtype": "TCP", 00:20:16.875 "adrfam": "IPv4", 00:20:16.875 "traddr": "10.0.0.1", 00:20:16.875 "trsvcid": "54452" 00:20:16.875 }, 00:20:16.875 "auth": { 00:20:16.875 "state": "completed", 00:20:16.875 "digest": "sha512", 00:20:16.875 "dhgroup": "ffdhe4096" 00:20:16.875 } 00:20:16.875 } 00:20:16.875 ]' 00:20:16.875 09:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.875 09:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.875 09:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.875 09:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:16.875 09:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.875 09:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.875 09:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.875 09:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.133 09:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==: --dhchap-ctrl-secret DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=: 00:20:18.068 09:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.069 09:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.069 09:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.069 09:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.069 09:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.069 09:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.069 09:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:18.069 09:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:18.327 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:18.327 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.327 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:18.327 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:18.327 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:18.327 09:53:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.327 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.327 09:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.327 09:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.327 09:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.327 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.327 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.893 00:20:18.893 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.893 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.893 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.152 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.152 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.152 09:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.152 09:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.152 09:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.152 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.152 { 00:20:19.152 "cntlid": 123, 00:20:19.152 "qid": 0, 00:20:19.152 "state": "enabled", 00:20:19.152 "thread": "nvmf_tgt_poll_group_000", 00:20:19.152 "listen_address": { 00:20:19.152 "trtype": "TCP", 00:20:19.152 "adrfam": "IPv4", 00:20:19.152 "traddr": "10.0.0.2", 00:20:19.152 "trsvcid": "4420" 00:20:19.152 }, 00:20:19.152 "peer_address": { 00:20:19.152 "trtype": "TCP", 00:20:19.152 "adrfam": "IPv4", 00:20:19.152 "traddr": "10.0.0.1", 00:20:19.152 "trsvcid": "54474" 00:20:19.152 }, 00:20:19.152 "auth": { 00:20:19.152 "state": "completed", 00:20:19.152 "digest": "sha512", 00:20:19.152 "dhgroup": "ffdhe4096" 00:20:19.152 } 00:20:19.152 } 00:20:19.152 ]' 00:20:19.152 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.152 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:19.152 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.152 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:19.152 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.152 09:53:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.152 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.152 09:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.410 09:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjY5MTQ0NzBiODRiMzMwZDk0NDg1ODBiYTQwZGU3YTbmrPWt: --dhchap-ctrl-secret DHHC-1:02:ZDZhNWQ2M2M4M2I4MjkwZjZmZGVlNmQ1NmFlYjE3ZmE2MDMzYjdkOWJhZThiMmRlZUxbuA==: 00:20:20.343 09:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.343 09:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.343 09:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.343 09:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.343 09:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.343 09:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.343 09:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:20.343 09:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:20.602 09:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:20.602 09:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.602 09:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:20.602 09:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:20.602 09:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:20.602 09:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.602 09:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.602 09:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.602 09:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.602 09:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.602 09:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.602 09:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.168 00:20:21.168 09:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.168 09:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.168 09:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.425 09:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.425 09:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.425 09:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.425 09:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.425 09:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.425 09:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.425 { 00:20:21.425 "cntlid": 125, 00:20:21.425 "qid": 0, 00:20:21.425 "state": "enabled", 00:20:21.425 "thread": "nvmf_tgt_poll_group_000", 00:20:21.425 "listen_address": { 00:20:21.425 "trtype": "TCP", 00:20:21.425 "adrfam": "IPv4", 00:20:21.425 "traddr": "10.0.0.2", 00:20:21.425 "trsvcid": "4420" 00:20:21.425 }, 00:20:21.425 "peer_address": { 00:20:21.425 "trtype": "TCP", 00:20:21.425 "adrfam": "IPv4", 00:20:21.425 "traddr": "10.0.0.1", 00:20:21.425 "trsvcid": "54500" 00:20:21.425 }, 00:20:21.425 "auth": { 00:20:21.425 "state": "completed", 00:20:21.425 "digest": "sha512", 00:20:21.425 "dhgroup": "ffdhe4096" 00:20:21.425 } 00:20:21.425 } 00:20:21.425 ]' 00:20:21.425 09:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.425 09:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:21.425 09:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.425 09:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:21.425 09:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.425 09:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.425 09:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.425 09:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.685 09:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzFlM2QwM2E5M2ZhODA1YzkwYTZkMGY2ZWUwOTY5NzI3ODE2ZGU5NDgyNmIxMGQw5g/R1Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZiN2I5ZTQxMDllN2UyOTg2ZDI0ZDk0OTMyMDdlNmbIITEQ: 00:20:22.620 09:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
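
The trace above repeats one pattern for every (digest, dhgroup, key) combination: pin the SPDK host to a single DHCHAP digest and DH group, authorize the host NQN on the subsystem with a bidirectional key pair, attach a controller, assert the negotiated auth parameters on the resulting qpair, then rerun the same handshake through the kernel initiator with the exported DHHC-1 secrets. A minimal sketch of one such iteration, assuming a running target and SPDK host with the key names (key0..key3 / ckey0..ckey3) registered earlier in the test, before this excerpt; the addresses, NQNs, RPC methods, and flags are taken verbatim from this run, while the shell variable names are illustrative:

  #!/usr/bin/env bash
  # One iteration of the connect/verify loop, condensed from target/auth.sh.
  # rpc.py with no -s flag drives the target; -s /var/tmp/host.sock drives the
  # SPDK host (bdev_nvme) side. key2/ckey2 name keys registered before this loop.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  digest=sha512 dhgroup=ffdhe4096 key=key2 ckey=ckey2

  # Pin the host to one digest/DH group so the negotiation outcome is known.
  "$rpc" -s "$hostsock" bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Authorize the host NQN on the subsystem with a bidirectional key pair.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

  # Attach a controller, then assert what the target saw on the new qpair.
  "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
  [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0

  # Repeat the handshake through the kernel initiator; the DHHC-1:xx:...: blobs
  # on the nvme connect lines above are the exported secrets for this key pair
  # (held in variables here rather than inlined).
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
  nvme disconnect -n "$subnqn"
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Constraining the host to exactly one digest and one DH group per iteration is what makes the jq assertions meaningful: whatever the qpair reports as negotiated must match the only values the host was willing to offer, so each pass of the loop pins down one cell of the digest-by-dhgroup matrix.
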
00:20:22.620 09:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.620 09:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.620 09:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.620 09:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.620 09:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.620 09:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:22.620 09:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:23.190 09:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:23.190 09:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.190 09:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:23.190 09:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:23.190 09:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:23.190 09:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.190 09:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:23.190 09:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.190 09:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.190 09:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.190 09:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.190 09:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.449 00:20:23.449 09:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.449 09:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.449 09:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.707 09:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.707 09:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.707 09:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.707 09:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:23.707 09:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.707 09:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.707 { 00:20:23.707 "cntlid": 127, 00:20:23.707 "qid": 0, 00:20:23.707 "state": "enabled", 00:20:23.707 "thread": "nvmf_tgt_poll_group_000", 00:20:23.707 "listen_address": { 00:20:23.707 "trtype": "TCP", 00:20:23.707 "adrfam": "IPv4", 00:20:23.707 "traddr": "10.0.0.2", 00:20:23.707 "trsvcid": "4420" 00:20:23.707 }, 00:20:23.707 "peer_address": { 00:20:23.707 "trtype": "TCP", 00:20:23.707 "adrfam": "IPv4", 00:20:23.707 "traddr": "10.0.0.1", 00:20:23.707 "trsvcid": "54524" 00:20:23.707 }, 00:20:23.707 "auth": { 00:20:23.707 "state": "completed", 00:20:23.707 "digest": "sha512", 00:20:23.707 "dhgroup": "ffdhe4096" 00:20:23.707 } 00:20:23.707 } 00:20:23.707 ]' 00:20:23.707 09:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.707 09:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.707 09:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.707 09:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:23.707 09:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.966 09:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.966 09:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.966 09:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.966 09:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODRhZGZiMGQyNTkxNWM5N2E3NDI2YjQ4OGQwZTVhNjljMWE4MTNiZGQ4NWZkYTlmODZiOGM1MmU3MGJlNzJmZMCAjiI=: 00:20:24.902 09:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.902 09:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.902 09:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.902 09:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.161 09:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.161 09:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.161 09:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.161 09:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:25.161 09:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:25.419 09:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:20:25.419 09:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.419 09:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:25.419 09:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:25.419 09:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:25.419 09:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.419 09:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.419 09:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.419 09:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.419 09:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.419 09:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.419 09:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.020 00:20:26.020 09:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.020 09:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.020 09:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.278 09:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.278 09:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.278 09:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.278 09:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.278 09:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.278 09:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.278 { 00:20:26.278 "cntlid": 129, 00:20:26.278 "qid": 0, 00:20:26.278 "state": "enabled", 00:20:26.278 "thread": "nvmf_tgt_poll_group_000", 00:20:26.278 "listen_address": { 00:20:26.278 "trtype": "TCP", 00:20:26.278 "adrfam": "IPv4", 00:20:26.278 "traddr": "10.0.0.2", 00:20:26.278 "trsvcid": "4420" 00:20:26.278 }, 00:20:26.278 "peer_address": { 00:20:26.278 "trtype": "TCP", 00:20:26.278 "adrfam": "IPv4", 00:20:26.278 "traddr": "10.0.0.1", 00:20:26.278 "trsvcid": "54460" 00:20:26.278 }, 00:20:26.278 "auth": { 00:20:26.278 "state": "completed", 00:20:26.278 "digest": "sha512", 00:20:26.278 "dhgroup": "ffdhe6144" 00:20:26.278 } 00:20:26.278 } 00:20:26.278 ]' 00:20:26.278 09:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.278 09:53:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:26.278 09:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.278 09:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:26.278 09:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.278 09:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.278 09:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.278 09:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.536 09:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==: --dhchap-ctrl-secret DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=: 00:20:27.472 09:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.472 09:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.472 09:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.472 09:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.472 09:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.472 09:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.472 09:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:27.472 09:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:27.730 09:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:27.730 09:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.730 09:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:27.730 09:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:27.730 09:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:27.730 09:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.730 09:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.730 09:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.730 09:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.730 09:53:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.730 09:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.730 09:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.297 00:20:28.297 09:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.297 09:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.297 09:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.556 09:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.556 09:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.556 09:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.556 09:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.556 09:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.556 09:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.556 { 00:20:28.556 "cntlid": 131, 00:20:28.556 "qid": 0, 00:20:28.556 "state": "enabled", 00:20:28.556 "thread": "nvmf_tgt_poll_group_000", 00:20:28.556 "listen_address": { 00:20:28.556 "trtype": "TCP", 00:20:28.556 "adrfam": "IPv4", 00:20:28.556 "traddr": "10.0.0.2", 00:20:28.556 "trsvcid": "4420" 00:20:28.556 }, 00:20:28.556 "peer_address": { 00:20:28.556 "trtype": "TCP", 00:20:28.556 "adrfam": "IPv4", 00:20:28.556 "traddr": "10.0.0.1", 00:20:28.556 "trsvcid": "54486" 00:20:28.556 }, 00:20:28.556 "auth": { 00:20:28.556 "state": "completed", 00:20:28.556 "digest": "sha512", 00:20:28.556 "dhgroup": "ffdhe6144" 00:20:28.556 } 00:20:28.556 } 00:20:28.556 ]' 00:20:28.556 09:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.814 09:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:28.814 09:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.814 09:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:28.814 09:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.814 09:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.814 09:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.814 09:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.072 09:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjY5MTQ0NzBiODRiMzMwZDk0NDg1ODBiYTQwZGU3YTbmrPWt: --dhchap-ctrl-secret DHHC-1:02:ZDZhNWQ2M2M4M2I4MjkwZjZmZGVlNmQ1NmFlYjE3ZmE2MDMzYjdkOWJhZThiMmRlZUxbuA==: 00:20:30.009 09:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.009 09:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.009 09:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.009 09:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.009 09:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.009 09:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.009 09:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:30.009 09:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:30.267 09:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:30.267 09:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.267 09:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:30.267 09:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:30.267 09:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:30.267 09:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.267 09:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.267 09:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.267 09:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.267 09:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.267 09:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.267 09:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.835 00:20:30.835 09:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.835 09:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.835 09:53:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.093 09:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.093 09:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.093 09:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.093 09:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.093 09:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.093 09:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.093 { 00:20:31.093 "cntlid": 133, 00:20:31.093 "qid": 0, 00:20:31.093 "state": "enabled", 00:20:31.093 "thread": "nvmf_tgt_poll_group_000", 00:20:31.093 "listen_address": { 00:20:31.093 "trtype": "TCP", 00:20:31.093 "adrfam": "IPv4", 00:20:31.093 "traddr": "10.0.0.2", 00:20:31.093 "trsvcid": "4420" 00:20:31.093 }, 00:20:31.093 "peer_address": { 00:20:31.093 "trtype": "TCP", 00:20:31.093 "adrfam": "IPv4", 00:20:31.093 "traddr": "10.0.0.1", 00:20:31.093 "trsvcid": "54522" 00:20:31.093 }, 00:20:31.093 "auth": { 00:20:31.093 "state": "completed", 00:20:31.093 "digest": "sha512", 00:20:31.093 "dhgroup": "ffdhe6144" 00:20:31.093 } 00:20:31.093 } 00:20:31.093 ]' 00:20:31.093 09:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.093 09:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.093 09:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.093 09:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:31.093 09:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.093 09:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.093 09:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.093 09:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.351 09:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzFlM2QwM2E5M2ZhODA1YzkwYTZkMGY2ZWUwOTY5NzI3ODE2ZGU5NDgyNmIxMGQw5g/R1Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZiN2I5ZTQxMDllN2UyOTg2ZDI0ZDk0OTMyMDdlNmbIITEQ: 00:20:32.730 09:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.730 09:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.730 09:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.730 09:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.730 09:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.730 09:53:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.730 09:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:32.730 09:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:32.730 09:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:32.730 09:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.730 09:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:32.730 09:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:32.730 09:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:32.730 09:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.730 09:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:32.730 09:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.730 09:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.730 09:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.730 09:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:32.730 09:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.296 00:20:33.296 09:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.296 09:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.296 09:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.555 09:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.555 09:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.555 09:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.555 09:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.555 09:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.555 09:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.555 { 00:20:33.555 "cntlid": 135, 00:20:33.555 "qid": 0, 00:20:33.555 "state": "enabled", 00:20:33.555 "thread": "nvmf_tgt_poll_group_000", 00:20:33.555 "listen_address": { 00:20:33.555 "trtype": "TCP", 00:20:33.555 "adrfam": "IPv4", 00:20:33.555 "traddr": "10.0.0.2", 00:20:33.555 "trsvcid": "4420" 00:20:33.555 }, 
00:20:33.555 "peer_address": { 00:20:33.555 "trtype": "TCP", 00:20:33.555 "adrfam": "IPv4", 00:20:33.555 "traddr": "10.0.0.1", 00:20:33.555 "trsvcid": "54546" 00:20:33.555 }, 00:20:33.555 "auth": { 00:20:33.555 "state": "completed", 00:20:33.555 "digest": "sha512", 00:20:33.555 "dhgroup": "ffdhe6144" 00:20:33.555 } 00:20:33.555 } 00:20:33.555 ]' 00:20:33.555 09:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.555 09:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.555 09:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.555 09:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:33.555 09:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.555 09:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.555 09:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.555 09:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.814 09:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODRhZGZiMGQyNTkxNWM5N2E3NDI2YjQ4OGQwZTVhNjljMWE4MTNiZGQ4NWZkYTlmODZiOGM1MmU3MGJlNzJmZMCAjiI=: 00:20:34.750 09:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.008 09:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.008 09:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.008 09:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.008 09:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.008 09:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.008 09:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.008 09:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:35.008 09:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:35.268 09:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:35.268 09:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.268 09:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:35.268 09:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:35.268 09:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:35.268 09:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:35.268 09:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.268 09:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.268 09:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.268 09:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.269 09:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.269 09:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.207 00:20:36.207 09:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.207 09:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.207 09:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.207 09:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.207 09:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.207 09:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.207 09:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.207 09:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.207 09:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.207 { 00:20:36.207 "cntlid": 137, 00:20:36.207 "qid": 0, 00:20:36.207 "state": "enabled", 00:20:36.207 "thread": "nvmf_tgt_poll_group_000", 00:20:36.207 "listen_address": { 00:20:36.207 "trtype": "TCP", 00:20:36.207 "adrfam": "IPv4", 00:20:36.207 "traddr": "10.0.0.2", 00:20:36.207 "trsvcid": "4420" 00:20:36.207 }, 00:20:36.207 "peer_address": { 00:20:36.207 "trtype": "TCP", 00:20:36.207 "adrfam": "IPv4", 00:20:36.207 "traddr": "10.0.0.1", 00:20:36.207 "trsvcid": "59424" 00:20:36.207 }, 00:20:36.207 "auth": { 00:20:36.207 "state": "completed", 00:20:36.207 "digest": "sha512", 00:20:36.207 "dhgroup": "ffdhe8192" 00:20:36.207 } 00:20:36.207 } 00:20:36.207 ]' 00:20:36.207 09:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.464 09:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.464 09:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.464 09:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:36.464 09:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.464 09:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.464 09:53:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.464 09:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.720 09:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==: --dhchap-ctrl-secret DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=: 00:20:37.655 09:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.655 09:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.655 09:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.655 09:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.655 09:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.655 09:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.655 09:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:37.655 09:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:37.912 09:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:37.912 09:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.912 09:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:37.912 09:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:37.912 09:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:37.912 09:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.913 09:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.913 09:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.913 09:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.913 09:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.913 09:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.913 09:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.849 00:20:38.850 09:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.850 09:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.850 09:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.850 09:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.850 09:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.850 09:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.850 09:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.850 09:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.850 09:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.850 { 00:20:38.850 "cntlid": 139, 00:20:38.850 "qid": 0, 00:20:38.850 "state": "enabled", 00:20:38.850 "thread": "nvmf_tgt_poll_group_000", 00:20:38.850 "listen_address": { 00:20:38.850 "trtype": "TCP", 00:20:38.850 "adrfam": "IPv4", 00:20:38.850 "traddr": "10.0.0.2", 00:20:38.850 "trsvcid": "4420" 00:20:38.850 }, 00:20:38.850 "peer_address": { 00:20:38.850 "trtype": "TCP", 00:20:38.850 "adrfam": "IPv4", 00:20:38.850 "traddr": "10.0.0.1", 00:20:38.850 "trsvcid": "59438" 00:20:38.850 }, 00:20:38.850 "auth": { 00:20:38.850 "state": "completed", 00:20:38.850 "digest": "sha512", 00:20:38.850 "dhgroup": "ffdhe8192" 00:20:38.850 } 00:20:38.850 } 00:20:38.850 ]' 00:20:38.850 09:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.107 09:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.107 09:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.107 09:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.107 09:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.107 09:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.107 09:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.107 09:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.365 09:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjY5MTQ0NzBiODRiMzMwZDk0NDg1ODBiYTQwZGU3YTbmrPWt: --dhchap-ctrl-secret DHHC-1:02:ZDZhNWQ2M2M4M2I4MjkwZjZmZGVlNmQ1NmFlYjE3ZmE2MDMzYjdkOWJhZThiMmRlZUxbuA==: 00:20:40.299 09:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.299 09:53:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.299 09:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.299 09:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.299 09:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.299 09:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.299 09:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:40.299 09:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:40.557 09:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:40.557 09:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.557 09:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:40.557 09:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:40.557 09:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:40.557 09:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.557 09:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.557 09:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.557 09:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.557 09:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.557 09:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.557 09:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.580 00:20:41.580 09:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.580 09:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.580 09:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.838 09:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.838 09:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.838 09:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.838 09:53:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:41.838 09:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.838 09:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.838 { 00:20:41.838 "cntlid": 141, 00:20:41.838 "qid": 0, 00:20:41.838 "state": "enabled", 00:20:41.838 "thread": "nvmf_tgt_poll_group_000", 00:20:41.838 "listen_address": { 00:20:41.838 "trtype": "TCP", 00:20:41.838 "adrfam": "IPv4", 00:20:41.838 "traddr": "10.0.0.2", 00:20:41.838 "trsvcid": "4420" 00:20:41.838 }, 00:20:41.838 "peer_address": { 00:20:41.838 "trtype": "TCP", 00:20:41.838 "adrfam": "IPv4", 00:20:41.838 "traddr": "10.0.0.1", 00:20:41.838 "trsvcid": "59462" 00:20:41.838 }, 00:20:41.838 "auth": { 00:20:41.838 "state": "completed", 00:20:41.838 "digest": "sha512", 00:20:41.838 "dhgroup": "ffdhe8192" 00:20:41.838 } 00:20:41.838 } 00:20:41.838 ]' 00:20:41.838 09:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.838 09:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.838 09:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.838 09:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:41.838 09:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.838 09:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.838 09:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.838 09:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.095 09:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzFlM2QwM2E5M2ZhODA1YzkwYTZkMGY2ZWUwOTY5NzI3ODE2ZGU5NDgyNmIxMGQw5g/R1Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZiN2I5ZTQxMDllN2UyOTg2ZDI0ZDk0OTMyMDdlNmbIITEQ: 00:20:43.032 09:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.032 09:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.032 09:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.032 09:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.032 09:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.032 09:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.032 09:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:43.032 09:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:43.291 09:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:20:43.291 09:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.291 09:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:43.291 09:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:43.291 09:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:43.291 09:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.291 09:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:43.291 09:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.291 09:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.291 09:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.291 09:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.291 09:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:44.226 00:20:44.226 09:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.226 09:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.226 09:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.483 09:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.483 09:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.483 09:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.484 09:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.484 09:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.484 09:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.484 { 00:20:44.484 "cntlid": 143, 00:20:44.484 "qid": 0, 00:20:44.484 "state": "enabled", 00:20:44.484 "thread": "nvmf_tgt_poll_group_000", 00:20:44.484 "listen_address": { 00:20:44.484 "trtype": "TCP", 00:20:44.484 "adrfam": "IPv4", 00:20:44.484 "traddr": "10.0.0.2", 00:20:44.484 "trsvcid": "4420" 00:20:44.484 }, 00:20:44.484 "peer_address": { 00:20:44.484 "trtype": "TCP", 00:20:44.484 "adrfam": "IPv4", 00:20:44.484 "traddr": "10.0.0.1", 00:20:44.484 "trsvcid": "59480" 00:20:44.484 }, 00:20:44.484 "auth": { 00:20:44.484 "state": "completed", 00:20:44.484 "digest": "sha512", 00:20:44.484 "dhgroup": "ffdhe8192" 00:20:44.484 } 00:20:44.484 } 00:20:44.484 ]' 00:20:44.484 09:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.484 09:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.484 
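Each connect_authenticate iteration above follows the same shape: restrict the host-side DH-HMAC-CHAP parameters to one digest and one DH group, register the key pair for the host NQN on the target subsystem, attach a controller through the host RPC socket, and confirm from nvmf_subsystem_get_qpairs that the qpair negotiated the expected digest, DH group, and auth state "completed". A minimal sketch of one iteration, using only the NQNs, addresses, and key names visible in this run; it assumes the keys were registered earlier in the script, and that the target-side rpc.py reaches the target's default socket (in this job the target RPCs actually run inside the cvl_0_0_ns_spdk network namespace via rpc_cmd):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  subnqn=nqn.2024-03.io.spdk:cnode0
  # host side: allow exactly one digest and one DH group
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # target side: require DH-HMAC-CHAP with key3 for this host (key3 has no controller key in this run)
  rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3
  # host side: attach, then inspect the authenticated qpair from the target
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key3
  rpc.py nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect: completed

The detach plus nvme connect that follows each iteration re-runs the same negotiation through the kernel initiator, passing the key material inline as the DHHC-1:xx: secrets seen above.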
09:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.484 09:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:44.484 09:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.484 09:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.484 09:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.484 09:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.742 09:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODRhZGZiMGQyNTkxNWM5N2E3NDI2YjQ4OGQwZTVhNjljMWE4MTNiZGQ4NWZkYTlmODZiOGM1MmU3MGJlNzJmZMCAjiI=: 00:20:45.675 09:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.935 09:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.935 09:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.935 09:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.935 09:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.935 09:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:45.935 09:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:45.935 09:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:45.935 09:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:45.935 09:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:45.935 09:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:46.196 09:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:46.196 09:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.196 09:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:46.196 09:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:46.196 09:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:46.196 09:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.196 09:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:46.196 09:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.196 09:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.196 09:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.196 09:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.196 09:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.131 00:20:47.131 09:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.131 09:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.131 09:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.131 09:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.131 09:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.131 09:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.131 09:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.131 09:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.131 09:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.131 { 00:20:47.131 "cntlid": 145, 00:20:47.131 "qid": 0, 00:20:47.131 "state": "enabled", 00:20:47.131 "thread": "nvmf_tgt_poll_group_000", 00:20:47.131 "listen_address": { 00:20:47.131 "trtype": "TCP", 00:20:47.131 "adrfam": "IPv4", 00:20:47.131 "traddr": "10.0.0.2", 00:20:47.131 "trsvcid": "4420" 00:20:47.131 }, 00:20:47.131 "peer_address": { 00:20:47.131 "trtype": "TCP", 00:20:47.131 "adrfam": "IPv4", 00:20:47.131 "traddr": "10.0.0.1", 00:20:47.131 "trsvcid": "36456" 00:20:47.131 }, 00:20:47.131 "auth": { 00:20:47.131 "state": "completed", 00:20:47.131 "digest": "sha512", 00:20:47.131 "dhgroup": "ffdhe8192" 00:20:47.131 } 00:20:47.131 } 00:20:47.131 ]' 00:20:47.131 09:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.389 09:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.389 09:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.389 09:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:47.389 09:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.389 09:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.389 09:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.390 09:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.647 09:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MDVhOWRmNTE5NmJjY2QwMGE0ZWYyNjA3ZjFmZTQ0MmFjNmRlZTI4YjZjYzg3ODc3iuGlzA==: --dhchap-ctrl-secret DHHC-1:03:NWMwZjViZDYzYjkzMmMzOWM0MzIwMmI3ZjM2YTI1NDA4NzZlZTJlZjdlODE2NDAwNzJiYzY2NGE0MDZjYmYzYqz81Y0=: 00:20:48.585 09:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.585 09:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.585 09:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.585 09:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.585 09:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.585 09:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:20:48.585 09:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.585 09:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.585 09:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.585 09:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:48.585 09:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:48.585 09:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:48.585 09:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:48.585 09:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.585 09:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:48.585 09:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.585 09:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:48.585 09:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:20:49.520 request: 00:20:49.520 { 00:20:49.520 "name": "nvme0", 00:20:49.520 "trtype": "tcp", 00:20:49.520 "traddr": "10.0.0.2", 00:20:49.520 "adrfam": "ipv4", 00:20:49.520 "trsvcid": "4420", 00:20:49.520 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:49.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:49.520 "prchk_reftag": false, 00:20:49.520 "prchk_guard": false, 00:20:49.520 "hdgst": false, 00:20:49.520 "ddgst": false, 00:20:49.520 "dhchap_key": "key2", 00:20:49.520 "method": "bdev_nvme_attach_controller", 00:20:49.520 "req_id": 1 00:20:49.520 } 00:20:49.520 Got JSON-RPC error response 00:20:49.520 response: 00:20:49.520 { 00:20:49.520 "code": -5, 00:20:49.520 "message": "Input/output error" 00:20:49.520 } 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:49.520 09:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:50.457 request: 00:20:50.457 { 00:20:50.457 "name": "nvme0", 00:20:50.457 "trtype": "tcp", 00:20:50.457 "traddr": "10.0.0.2", 00:20:50.457 "adrfam": "ipv4", 00:20:50.457 "trsvcid": "4420", 00:20:50.457 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:50.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:50.457 "prchk_reftag": false, 00:20:50.457 "prchk_guard": false, 00:20:50.457 "hdgst": false, 00:20:50.457 "ddgst": false, 00:20:50.457 "dhchap_key": "key1", 00:20:50.457 "dhchap_ctrlr_key": "ckey2", 00:20:50.457 "method": "bdev_nvme_attach_controller", 00:20:50.457 "req_id": 1 00:20:50.457 } 00:20:50.457 Got JSON-RPC error response 00:20:50.457 response: 00:20:50.457 { 00:20:50.457 "code": -5, 00:20:50.457 "message": "Input/output error" 00:20:50.457 } 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.457 09:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.392 request: 00:20:51.392 { 00:20:51.392 "name": "nvme0", 00:20:51.392 "trtype": "tcp", 00:20:51.392 "traddr": "10.0.0.2", 00:20:51.392 "adrfam": "ipv4", 00:20:51.392 "trsvcid": "4420", 00:20:51.392 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:51.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:51.392 "prchk_reftag": false, 00:20:51.392 "prchk_guard": false, 00:20:51.392 "hdgst": false, 00:20:51.392 "ddgst": false, 00:20:51.392 "dhchap_key": "key1", 00:20:51.392 "dhchap_ctrlr_key": "ckey1", 00:20:51.392 "method": "bdev_nvme_attach_controller", 00:20:51.392 "req_id": 1 00:20:51.392 } 00:20:51.392 Got JSON-RPC error response 00:20:51.392 response: 00:20:51.392 { 00:20:51.392 "code": -5, 00:20:51.392 "message": "Input/output error" 00:20:51.392 } 00:20:51.392 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:51.392 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:51.392 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:51.392 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:51.392 09:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.392 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.392 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.392 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.392 09:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1907609 00:20:51.392 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1907609 ']' 00:20:51.392 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1907609 00:20:51.392 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:51.392 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:51.392 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1907609 00:20:51.392 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:51.392 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:20:51.392 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1907609' 00:20:51.392 killing process with pid 1907609 00:20:51.392 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1907609 00:20:51.392 09:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1907609 00:20:51.392 09:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:51.392 09:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:51.392 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:51.392 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.392 09:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1930103 00:20:51.392 09:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:51.392 09:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1930103 00:20:51.392 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1930103 ']' 00:20:51.392 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.392 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:51.392 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.392 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:51.392 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.651 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.651 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:51.651 09:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:51.651 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:51.651 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.908 09:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.908 09:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:51.908 09:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1930103 00:20:51.908 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1930103 ']' 00:20:51.908 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.908 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:51.908 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
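At this point the script tears down the first target (pid 1907609) and starts a fresh nvmf_tgt with authentication debug logging enabled: --wait-for-rpc holds framework initialization until the configuration RPCs have been issued, and -L nvmf_auth turns on the nvmf_auth log flag. A sketch of the restart-and-wait pattern, inferred from the command line logged above; the netns name and pids are specific to this run, and the rpc_get_methods poll is a simplified stand-in for the autotest waitforlisten helper:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # poll the RPC socket until the new target answers, then proceed with configuration
  until rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done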
00:20:51.908 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:51.908 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.165 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:52.165 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:52.165 09:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:52.165 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.165 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.165 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.165 09:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:52.165 09:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.165 09:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:52.165 09:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:52.165 09:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:52.165 09:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.165 09:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:52.165 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.165 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.165 09:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.165 09:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.165 09:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.098 00:20:53.098 09:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.098 09:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.098 09:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.407 09:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.407 09:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.407 09:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.407 09:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.407 09:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.407 09:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.407 { 00:20:53.407 
"cntlid": 1, 00:20:53.407 "qid": 0, 00:20:53.407 "state": "enabled", 00:20:53.407 "thread": "nvmf_tgt_poll_group_000", 00:20:53.407 "listen_address": { 00:20:53.407 "trtype": "TCP", 00:20:53.407 "adrfam": "IPv4", 00:20:53.407 "traddr": "10.0.0.2", 00:20:53.407 "trsvcid": "4420" 00:20:53.407 }, 00:20:53.407 "peer_address": { 00:20:53.407 "trtype": "TCP", 00:20:53.407 "adrfam": "IPv4", 00:20:53.407 "traddr": "10.0.0.1", 00:20:53.407 "trsvcid": "36514" 00:20:53.407 }, 00:20:53.407 "auth": { 00:20:53.407 "state": "completed", 00:20:53.407 "digest": "sha512", 00:20:53.407 "dhgroup": "ffdhe8192" 00:20:53.407 } 00:20:53.407 } 00:20:53.407 ]' 00:20:53.407 09:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.407 09:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.407 09:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.407 09:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:53.407 09:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.407 09:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.407 09:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.407 09:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.664 09:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODRhZGZiMGQyNTkxNWM5N2E3NDI2YjQ4OGQwZTVhNjljMWE4MTNiZGQ4NWZkYTlmODZiOGM1MmU3MGJlNzJmZMCAjiI=: 00:20:54.597 09:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.597 09:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.597 09:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.597 09:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.597 09:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.597 09:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:54.597 09:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.597 09:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.597 09:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.597 09:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:54.597 09:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:54.854 09:54:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.854 09:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:54.854 09:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.854 09:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:54.854 09:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:54.854 09:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:54.854 09:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:54.854 09:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.854 09:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.111 request: 00:20:55.111 { 00:20:55.111 "name": "nvme0", 00:20:55.111 "trtype": "tcp", 00:20:55.111 "traddr": "10.0.0.2", 00:20:55.111 "adrfam": "ipv4", 00:20:55.111 "trsvcid": "4420", 00:20:55.111 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:55.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:55.111 "prchk_reftag": false, 00:20:55.111 "prchk_guard": false, 00:20:55.111 "hdgst": false, 00:20:55.111 "ddgst": false, 00:20:55.111 "dhchap_key": "key3", 00:20:55.111 "method": "bdev_nvme_attach_controller", 00:20:55.111 "req_id": 1 00:20:55.111 } 00:20:55.111 Got JSON-RPC error response 00:20:55.111 response: 00:20:55.111 { 00:20:55.111 "code": -5, 00:20:55.111 "message": "Input/output error" 00:20:55.111 } 00:20:55.111 09:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:55.111 09:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:55.111 09:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:55.111 09:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:55.111 09:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:55.111 09:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:55.111 09:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:55.111 09:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:55.369 09:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.369 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:55.369 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.369 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:55.369 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:55.369 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:55.369 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:55.369 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.369 09:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.627 request: 00:20:55.627 { 00:20:55.627 "name": "nvme0", 00:20:55.627 "trtype": "tcp", 00:20:55.627 "traddr": "10.0.0.2", 00:20:55.627 "adrfam": "ipv4", 00:20:55.627 "trsvcid": "4420", 00:20:55.627 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:55.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:55.627 "prchk_reftag": false, 00:20:55.627 "prchk_guard": false, 00:20:55.627 "hdgst": false, 00:20:55.627 "ddgst": false, 00:20:55.627 "dhchap_key": "key3", 00:20:55.627 "method": "bdev_nvme_attach_controller", 00:20:55.627 "req_id": 1 00:20:55.627 } 00:20:55.627 Got JSON-RPC error response 00:20:55.627 response: 00:20:55.627 { 00:20:55.627 "code": -5, 00:20:55.627 "message": "Input/output error" 00:20:55.627 } 00:20:55.627 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:55.627 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:55.627 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:55.627 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:55.627 09:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:55.627 09:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:55.627 09:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:55.627 09:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:55.627 09:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:55.627 09:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:55.886 09:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.886 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.886 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.886 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.886 09:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.886 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.886 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.886 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.886 09:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:55.886 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:55.886 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:55.886 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:55.886 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:55.886 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:55.886 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:55.886 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:55.886 09:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:56.144 request: 00:20:56.144 { 00:20:56.144 "name": "nvme0", 00:20:56.144 "trtype": "tcp", 00:20:56.144 "traddr": "10.0.0.2", 00:20:56.144 "adrfam": "ipv4", 00:20:56.144 "trsvcid": "4420", 00:20:56.144 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:56.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:56.144 "prchk_reftag": false, 00:20:56.144 "prchk_guard": false, 00:20:56.144 "hdgst": false, 00:20:56.144 "ddgst": false, 00:20:56.144 
"dhchap_key": "key0", 00:20:56.144 "dhchap_ctrlr_key": "key1", 00:20:56.144 "method": "bdev_nvme_attach_controller", 00:20:56.144 "req_id": 1 00:20:56.144 } 00:20:56.144 Got JSON-RPC error response 00:20:56.144 response: 00:20:56.144 { 00:20:56.144 "code": -5, 00:20:56.144 "message": "Input/output error" 00:20:56.144 } 00:20:56.144 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:56.144 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:56.144 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:56.144 09:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:56.144 09:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:56.144 09:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:56.402 00:20:56.402 09:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:56.402 09:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:56.402 09:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.680 09:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.680 09:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.680 09:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.955 09:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:56.955 09:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:56.955 09:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1907644 00:20:56.955 09:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1907644 ']' 00:20:56.955 09:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1907644 00:20:56.955 09:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:56.955 09:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.955 09:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1907644 00:20:57.214 09:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:57.214 09:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:57.214 09:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1907644' 00:20:57.214 killing process with pid 1907644 00:20:57.214 09:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1907644 00:20:57.214 09:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1907644 
00:20:57.474 09:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:57.474 09:54:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:57.474 09:54:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:57.475 09:54:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:57.475 09:54:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:57.475 09:54:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:57.475 09:54:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:57.475 rmmod nvme_tcp 00:20:57.475 rmmod nvme_fabrics 00:20:57.475 rmmod nvme_keyring 00:20:57.475 09:54:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:57.475 09:54:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:57.475 09:54:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:57.475 09:54:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1930103 ']' 00:20:57.475 09:54:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1930103 00:20:57.475 09:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1930103 ']' 00:20:57.475 09:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1930103 00:20:57.475 09:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:57.475 09:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.475 09:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1930103 00:20:57.475 09:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:57.475 09:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:57.475 09:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1930103' 00:20:57.475 killing process with pid 1930103 00:20:57.475 09:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1930103 00:20:57.475 09:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1930103 00:20:57.734 09:54:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:57.734 09:54:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:57.734 09:54:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:57.734 09:54:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:57.734 09:54:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:57.734 09:54:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.734 09:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.734 09:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.275 09:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:00.275 09:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.uaW /tmp/spdk.key-sha256.qQi /tmp/spdk.key-sha384.qhe /tmp/spdk.key-sha512.H29 /tmp/spdk.key-sha512.UZ6 /tmp/spdk.key-sha384.iLl /tmp/spdk.key-sha256.den '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:00.275 00:21:00.275 real 3m8.806s 00:21:00.275 user 7m20.098s 00:21:00.275 sys 0m24.847s 00:21:00.275 09:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:00.275 09:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.275 ************************************ 00:21:00.275 END TEST nvmf_auth_target 00:21:00.275 ************************************ 00:21:00.275 09:54:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:00.275 09:54:16 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:00.275 09:54:16 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:00.275 09:54:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:00.275 09:54:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:00.275 09:54:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:00.275 ************************************ 00:21:00.275 START TEST nvmf_bdevio_no_huge 00:21:00.275 ************************************ 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:00.275 * Looking for test storage... 00:21:00.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
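The real/user/sys trio and the START/END banners framing nvmf_auth_target above come from the run_test wrapper, which also launches the bdevio test here. Roughly (a simplified sketch; the banner layout and return handling are approximations of the autotest_common.sh helper, not a verbatim copy):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"            # produces the real/user/sys lines seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}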
00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.275 09:54:16 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:00.275 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:00.276 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:00.276 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:00.276 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:00.276 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.276 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:00.276 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:00.276 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:00.276 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.276 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.276 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.276 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:00.276 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:00.276 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:00.276 09:54:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
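The e810 table above (and the x722/mlx tables that follow) is keyed by vendor:device IDs, and the per-pci loop further below resolves each matching function to its kernel net device through sysfs. That walk reduces to roughly this sketch (a simplification of gather_supported_nvmf_pci_devs, hard-coding the one device ID this log actually matches):

for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    # 0x8086:0x159b is the Intel E810 ("ice") function found twice in this log
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
    ls "$pci/net" 2>/dev/null    # -> cvl_0_0 / cvl_0_1 here
done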
00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:02.178 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:02.178 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:02.178 
09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:02.178 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:02.178 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:02.178 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.179 09:54:18 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:02.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:21:02.179 00:21:02.179 --- 10.0.0.2 ping statistics --- 00:21:02.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.179 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:02.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:21:02.179 00:21:02.179 --- 10.0.0.1 ping statistics --- 00:21:02.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.179 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1932854 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1932854 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1932854 ']' 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.179 09:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:02.179 [2024-07-15 09:54:18.793007] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:21:02.179 [2024-07-15 09:54:18.793109] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:02.179 [2024-07-15 09:54:18.844626] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:02.179 [2024-07-15 09:54:18.863742] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.179 [2024-07-15 09:54:18.945216] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.179 [2024-07-15 09:54:18.945272] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.179 [2024-07-15 09:54:18.945300] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.179 [2024-07-15 09:54:18.945312] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.179 [2024-07-15 09:54:18.945322] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
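This is the point of the no_huge variant: the target runs with --no-huge -s 1024, so DPDK backs its 1024 MB memory pool with ordinary anonymous pages instead of hugepages (note --no-huge --iova-mode=va in the EAL parameters above). The invocation, restated for clarity:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
# -m 0x78 == 0b1111000: reactors on cores 3-6, matching the four
# "Reactor started on core 3/4/5/6" notices that follow.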
00:21:02.179 [2024-07-15 09:54:18.945409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:02.179 [2024-07-15 09:54:18.945478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:02.179 [2024-07-15 09:54:18.945534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:02.179 [2024-07-15 09:54:18.945538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:02.439 [2024-07-15 09:54:19.066317] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:02.439 Malloc0 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:02.439 [2024-07-15 09:54:19.104186] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:02.439 { 00:21:02.439 "params": { 00:21:02.439 "name": "Nvme$subsystem", 00:21:02.439 "trtype": "$TEST_TRANSPORT", 00:21:02.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.439 "adrfam": "ipv4", 00:21:02.439 "trsvcid": "$NVMF_PORT", 00:21:02.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.439 "hdgst": ${hdgst:-false}, 00:21:02.439 "ddgst": ${ddgst:-false} 00:21:02.439 }, 00:21:02.439 "method": "bdev_nvme_attach_controller" 00:21:02.439 } 00:21:02.439 EOF 00:21:02.439 )") 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:02.439 09:54:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:02.439 "params": { 00:21:02.439 "name": "Nvme1", 00:21:02.439 "trtype": "tcp", 00:21:02.439 "traddr": "10.0.0.2", 00:21:02.439 "adrfam": "ipv4", 00:21:02.439 "trsvcid": "4420", 00:21:02.439 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.439 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:02.439 "hdgst": false, 00:21:02.439 "ddgst": false 00:21:02.439 }, 00:21:02.439 "method": "bdev_nvme_attach_controller" 00:21:02.439 }' 00:21:02.439 [2024-07-15 09:54:19.154565] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:21:02.439 [2024-07-15 09:54:19.154640] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1932913 ] 00:21:02.439 [2024-07-15 09:54:19.198737] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
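bdevio never reads a config file here: gen_nvmf_target_json prints a bdev-subsystem JSON config on stdout, and the test hands it over as --json /dev/fd/62 via process substitution. A stand-alone equivalent would look roughly like this; the inner params object is exactly what the heredoc above expands to, while the outer subsystems/bdev envelope is assumed to be the standard SPDK JSON-config wrapper that gen_nvmf_target_json adds:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
    --no-huge -s 1024 --json <(
cat <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
EOF
)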
00:21:02.439 [2024-07-15 09:54:19.219220] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:02.698 [2024-07-15 09:54:19.306702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.698 [2024-07-15 09:54:19.306753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.698 [2024-07-15 09:54:19.306755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.955 I/O targets: 00:21:02.955 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:02.955 00:21:02.955 00:21:02.955 CUnit - A unit testing framework for C - Version 2.1-3 00:21:02.955 http://cunit.sourceforge.net/ 00:21:02.955 00:21:02.955 00:21:02.955 Suite: bdevio tests on: Nvme1n1 00:21:02.955 Test: blockdev write read block ...passed 00:21:02.955 Test: blockdev write zeroes read block ...passed 00:21:02.955 Test: blockdev write zeroes read no split ...passed 00:21:03.212 Test: blockdev write zeroes read split ...passed 00:21:03.212 Test: blockdev write zeroes read split partial ...passed 00:21:03.212 Test: blockdev reset ...[2024-07-15 09:54:19.832263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:03.212 [2024-07-15 09:54:19.832371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b21330 (9): Bad file descriptor 00:21:03.212 [2024-07-15 09:54:19.889184] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:03.212 passed 00:21:03.212 Test: blockdev write read 8 blocks ...passed 00:21:03.212 Test: blockdev write read size > 128k ...passed 00:21:03.212 Test: blockdev write read invalid size ...passed 00:21:03.212 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:03.212 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:03.212 Test: blockdev write read max offset ...passed 00:21:03.470 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:03.470 Test: blockdev writev readv 8 blocks ...passed 00:21:03.470 Test: blockdev writev readv 30 x 1block ...passed 00:21:03.470 Test: blockdev writev readv block ...passed 00:21:03.471 Test: blockdev writev readv size > 128k ...passed 00:21:03.471 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:03.471 Test: blockdev comparev and writev ...[2024-07-15 09:54:20.143155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:03.471 [2024-07-15 09:54:20.143211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:03.471 [2024-07-15 09:54:20.143235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:03.471 [2024-07-15 09:54:20.143254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:03.471 [2024-07-15 09:54:20.143648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:03.471 [2024-07-15 09:54:20.143673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:03.471 [2024-07-15 09:54:20.143694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:21:03.471 [2024-07-15 09:54:20.143710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:03.471 [2024-07-15 09:54:20.144097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:03.471 [2024-07-15 09:54:20.144122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:03.471 [2024-07-15 09:54:20.144143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:03.471 [2024-07-15 09:54:20.144168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:03.471 [2024-07-15 09:54:20.144559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:03.471 [2024-07-15 09:54:20.144588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:03.471 [2024-07-15 09:54:20.144610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:03.471 [2024-07-15 09:54:20.144626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:03.471 passed 00:21:03.471 Test: blockdev nvme passthru rw ...passed 00:21:03.471 Test: blockdev nvme passthru vendor specific ...[2024-07-15 09:54:20.227234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:03.471 [2024-07-15 09:54:20.227261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:03.471 [2024-07-15 09:54:20.227435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:03.471 [2024-07-15 09:54:20.227457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:03.471 [2024-07-15 09:54:20.227639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:03.471 [2024-07-15 09:54:20.227661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:03.471 [2024-07-15 09:54:20.227837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:03.471 [2024-07-15 09:54:20.227860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:03.471 passed 00:21:03.471 Test: blockdev nvme admin passthru ...passed 00:21:03.730 Test: blockdev copy ...passed 00:21:03.730 00:21:03.730 Run Summary: Type Total Ran Passed Failed Inactive 00:21:03.730 suites 1 1 n/a 0 0 00:21:03.730 tests 23 23 23 0 0 00:21:03.730 asserts 152 152 152 0 n/a 00:21:03.730 00:21:03.730 Elapsed time = 1.323 seconds 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:03.991 rmmod nvme_tcp 00:21:03.991 rmmod nvme_fabrics 00:21:03.991 rmmod nvme_keyring 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1932854 ']' 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1932854 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1932854 ']' 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1932854 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1932854 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1932854' 00:21:03.991 killing process with pid 1932854 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1932854 00:21:03.991 09:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1932854 00:21:04.560 09:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:04.560 09:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:04.560 09:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:04.560 09:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:04.560 09:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:04.560 09:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.560 09:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.560 09:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:21:06.469 09:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:06.469 00:21:06.469 real 0m6.587s 00:21:06.469 user 0m11.516s 00:21:06.469 sys 0m2.530s 00:21:06.469 09:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:06.469 09:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:06.469 ************************************ 00:21:06.469 END TEST nvmf_bdevio_no_huge 00:21:06.469 ************************************ 00:21:06.469 09:54:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:06.469 09:54:23 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:06.469 09:54:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:06.469 09:54:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:06.469 09:54:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:06.469 ************************************ 00:21:06.469 START TEST nvmf_tls 00:21:06.469 ************************************ 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:06.469 * Looking for test storage... 00:21:06.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
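For reference (this sketch is not part of the captured output): the rpc_py client resolved above is the JSON-RPC front end used for all target configuration in this test, and the TLS-specific socket calls it issues later in the trace can be replayed by hand as follows, assuming the target is up and listening on its default /var/tmp/spdk.sock RPC socket:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc sock_set_default_impl -i ssl                      # route new sockets through the ssl implementation
$rpc sock_impl_set_options -i ssl --tls-version 13     # pin the implementation to TLS 1.3
$rpc sock_impl_get_options -i ssl | jq -r .tls_version # read back the value the test asserts on (13)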
00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:06.469 09:54:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.999 
09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:08.999 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:08.999 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:08.999 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:08.999 09:54:25 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:08.999 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:08.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:08.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:21:08.999 00:21:08.999 --- 10.0.0.2 ping statistics --- 00:21:08.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.999 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:08.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:08.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:21:08.999 00:21:08.999 --- 10.0.0.1 ping statistics --- 00:21:08.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.999 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1935070 00:21:08.999 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:09.000 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1935070 00:21:09.000 09:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1935070 ']' 00:21:09.000 09:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.000 09:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:09.000 09:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.000 09:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:09.000 09:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.000 [2024-07-15 09:54:25.403024] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
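For reference, the connectivity layout verified by the pings above can be reproduced with the same commands the harness ran (a sketch assembled from the trace, not itself captured output; cvl_0_0 and cvl_0_1 are the two E810 port net devices discovered earlier, and the target side lives in its own network namespace so initiator and target can share one host; run as root):

ip netns add cvl_0_0_ns_spdk                                      # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                                # initiator -> target reachability check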
00:21:09.000 [2024-07-15 09:54:25.403098] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.000 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.000 [2024-07-15 09:54:25.442080] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:09.000 [2024-07-15 09:54:25.468665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.000 [2024-07-15 09:54:25.553803] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.000 [2024-07-15 09:54:25.553858] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.000 [2024-07-15 09:54:25.553887] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.000 [2024-07-15 09:54:25.553906] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.000 [2024-07-15 09:54:25.553928] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:09.000 [2024-07-15 09:54:25.553976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.000 09:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:09.000 09:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:09.000 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:09.000 09:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:09.000 09:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.000 09:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.000 09:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:09.000 09:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:09.258 true 00:21:09.258 09:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:09.258 09:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:09.515 09:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:09.515 09:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:09.515 09:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:09.773 09:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:09.773 09:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:10.031 09:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:10.031 09:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:10.031 09:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:10.289 09:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_get_options -i ssl 00:21:10.290 09:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:10.548 09:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:10.548 09:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:10.548 09:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:10.548 09:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:10.806 09:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:10.806 09:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:10.806 09:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:11.065 09:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:11.065 09:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:11.325 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:11.325 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:11.325 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:11.584 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:11.584 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:11.842 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:11.842 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:11.842 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:11.842 09:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:11.842 09:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:11.842 09:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@121 
-- # mktemp 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.phQFM9ffaP 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.c1h7sIslt9 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.phQFM9ffaP 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.c1h7sIslt9 00:21:11.843 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:12.101 09:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:12.670 09:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.phQFM9ffaP 00:21:12.670 09:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.phQFM9ffaP 00:21:12.670 09:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:12.670 [2024-07-15 09:54:29.450669] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.930 09:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:12.930 09:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:13.188 [2024-07-15 09:54:29.939962] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:13.188 [2024-07-15 09:54:29.940247] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.188 09:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:13.789 malloc0 00:21:13.789 09:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:14.047 09:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.phQFM9ffaP 00:21:14.306 [2024-07-15 09:54:30.837947] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:14.306 09:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.phQFM9ffaP 00:21:14.306 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.286 Initializing NVMe Controllers 00:21:24.286 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:21:24.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:24.286 Initialization complete. Launching workers. 00:21:24.286 ======================================================== 00:21:24.286 Latency(us) 00:21:24.286 Device Information : IOPS MiB/s Average min max 00:21:24.286 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7695.66 30.06 8318.64 1351.51 11465.13 00:21:24.286 ======================================================== 00:21:24.286 Total : 7695.66 30.06 8318.64 1351.51 11465.13 00:21:24.286 00:21:24.286 09:54:40 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.phQFM9ffaP 00:21:24.286 09:54:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:24.286 09:54:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:24.286 09:54:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:24.286 09:54:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.phQFM9ffaP' 00:21:24.286 09:54:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:24.286 09:54:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1936955 00:21:24.286 09:54:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:24.286 09:54:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1936955 /var/tmp/bdevperf.sock 00:21:24.286 09:54:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:24.286 09:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1936955 ']' 00:21:24.286 09:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:24.286 09:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:24.286 09:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:24.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:24.286 09:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:24.286 09:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.286 [2024-07-15 09:54:41.010100] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:21:24.286 [2024-07-15 09:54:41.010197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1936955 ] 00:21:24.286 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.286 [2024-07-15 09:54:41.041041] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
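The perf run above authenticated with the PSK written to /tmp/tmp.phQFM9ffaP, one of the two NVMeTLSkey-1:01:...: interchange strings produced earlier by format_interchange_psk. A minimal sketch (not captured output) of what its python heredoc computes, on the assumption that the configured key is taken as raw ASCII bytes and a little-endian CRC-32 is appended before base64 encoding, which is consistent with the 48-character base64 payload printed in the trace:

python3 - <<'EOF'
import base64, zlib

key = b"00112233445566778899aabbccddeeff"    # configured key from the trace
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte integrity field appended to the key
psk = base64.b64encode(key + crc).decode()
print(f"NVMeTLSkey-1:01:{psk}:")             # '01' is the digest indicator (the trace passes digest=1)
EOF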
00:21:24.286 [2024-07-15 09:54:41.068560] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.544 [2024-07-15 09:54:41.154449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.544 09:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.544 09:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:24.544 09:54:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.phQFM9ffaP 00:21:24.803 [2024-07-15 09:54:41.483648] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.803 [2024-07-15 09:54:41.483751] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:24.803 TLSTESTn1 00:21:24.803 09:54:41 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:25.062 Running I/O for 10 seconds... 00:21:35.038 00:21:35.038 Latency(us) 00:21:35.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.038 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:35.038 Verification LBA range: start 0x0 length 0x2000 00:21:35.038 TLSTESTn1 : 10.04 3374.05 13.18 0.00 0.00 37845.80 5873.97 71070.15 00:21:35.038 =================================================================================================================== 00:21:35.038 Total : 3374.05 13.18 0.00 0.00 37845.80 5873.97 71070.15 00:21:35.038 0 00:21:35.038 09:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:35.038 09:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1936955 00:21:35.038 09:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1936955 ']' 00:21:35.038 09:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1936955 00:21:35.038 09:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:35.038 09:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:35.038 09:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1936955 00:21:35.038 09:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:35.038 09:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:35.038 09:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1936955' 00:21:35.038 killing process with pid 1936955 00:21:35.038 09:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1936955 00:21:35.038 Received shutdown signal, test time was about 10.000000 seconds 00:21:35.038 00:21:35.038 Latency(us) 00:21:35.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.038 =================================================================================================================== 00:21:35.038 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:35.038 [2024-07-15 09:54:51.796668] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:21:35.038 09:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1936955 00:21:35.296 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.c1h7sIslt9 00:21:35.296 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:35.296 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.c1h7sIslt9 00:21:35.296 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:35.296 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.296 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:35.296 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.296 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.c1h7sIslt9 00:21:35.296 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:35.297 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:35.297 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:35.297 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.c1h7sIslt9' 00:21:35.297 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:35.297 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1938154 00:21:35.297 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:35.297 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:35.297 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1938154 /var/tmp/bdevperf.sock 00:21:35.297 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1938154 ']' 00:21:35.297 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.297 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.297 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:35.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:35.297 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.297 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.297 [2024-07-15 09:54:52.073253] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:21:35.297 [2024-07-15 09:54:52.073332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1938154 ] 00:21:35.555 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.555 [2024-07-15 09:54:52.105329] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:21:35.555 [2024-07-15 09:54:52.133498] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.555 [2024-07-15 09:54:52.218956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.555 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.555 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:35.555 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.c1h7sIslt9 00:21:36.121 [2024-07-15 09:54:52.604843] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:36.121 [2024-07-15 09:54:52.604992] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:36.121 [2024-07-15 09:54:52.615208] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:36.121 [2024-07-15 09:54:52.615884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c8d0 (107): Transport endpoint is not connected 00:21:36.121 [2024-07-15 09:54:52.616856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c8d0 (9): Bad file descriptor 00:21:36.121 [2024-07-15 09:54:52.617855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.121 [2024-07-15 09:54:52.617895] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:36.121 [2024-07-15 09:54:52.617912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:36.121 request: 00:21:36.121 { 00:21:36.121 "name": "TLSTEST", 00:21:36.121 "trtype": "tcp", 00:21:36.121 "traddr": "10.0.0.2", 00:21:36.121 "adrfam": "ipv4", 00:21:36.121 "trsvcid": "4420", 00:21:36.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:36.121 "prchk_reftag": false, 00:21:36.121 "prchk_guard": false, 00:21:36.121 "hdgst": false, 00:21:36.121 "ddgst": false, 00:21:36.121 "psk": "/tmp/tmp.c1h7sIslt9", 00:21:36.121 "method": "bdev_nvme_attach_controller", 00:21:36.121 "req_id": 1 00:21:36.121 } 00:21:36.121 Got JSON-RPC error response 00:21:36.121 response: 00:21:36.121 { 00:21:36.121 "code": -5, 00:21:36.121 "message": "Input/output error" 00:21:36.121 } 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1938154 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1938154 ']' 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1938154 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1938154 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1938154' 00:21:36.121 killing process with pid 1938154 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1938154 00:21:36.121 Received shutdown signal, test time was about 10.000000 seconds 00:21:36.121 00:21:36.121 Latency(us) 00:21:36.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.121 =================================================================================================================== 00:21:36.121 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:36.121 [2024-07-15 09:54:52.669454] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1938154 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.phQFM9ffaP 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.phQFM9ffaP 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.phQFM9ffaP 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.phQFM9ffaP' 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1938288 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1938288 /var/tmp/bdevperf.sock 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1938288 ']' 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.121 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.122 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:36.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:36.122 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.122 09:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.399 [2024-07-15 09:54:52.931707] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:21:36.399 [2024-07-15 09:54:52.931788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1938288 ] 00:21:36.399 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.399 [2024-07-15 09:54:52.964646] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:36.399 [2024-07-15 09:54:52.991904] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.399 [2024-07-15 09:54:53.075553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.399 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:36.399 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:36.399 09:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.phQFM9ffaP 00:21:36.967 [2024-07-15 09:54:53.454092] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:36.967 [2024-07-15 09:54:53.454247] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:36.967 [2024-07-15 09:54:53.459569] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:36.967 [2024-07-15 09:54:53.459604] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:36.967 [2024-07-15 09:54:53.459673] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:36.967 [2024-07-15 09:54:53.460195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfca8d0 (107): Transport endpoint is not connected 00:21:36.967 [2024-07-15 09:54:53.461182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfca8d0 (9): Bad file descriptor 00:21:36.967 [2024-07-15 09:54:53.462180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.967 [2024-07-15 09:54:53.462202] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:36.967 [2024-07-15 09:54:53.462234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:36.967 request: 00:21:36.967 { 00:21:36.967 "name": "TLSTEST", 00:21:36.967 "trtype": "tcp", 00:21:36.967 "traddr": "10.0.0.2", 00:21:36.967 "adrfam": "ipv4", 00:21:36.967 "trsvcid": "4420", 00:21:36.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.967 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:36.967 "prchk_reftag": false, 00:21:36.967 "prchk_guard": false, 00:21:36.967 "hdgst": false, 00:21:36.967 "ddgst": false, 00:21:36.967 "psk": "/tmp/tmp.phQFM9ffaP", 00:21:36.967 "method": "bdev_nvme_attach_controller", 00:21:36.967 "req_id": 1 00:21:36.967 } 00:21:36.967 Got JSON-RPC error response 00:21:36.967 response: 00:21:36.967 { 00:21:36.967 "code": -5, 00:21:36.967 "message": "Input/output error" 00:21:36.967 } 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1938288 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1938288 ']' 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1938288 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1938288 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1938288' 00:21:36.967 killing process with pid 1938288 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1938288 00:21:36.967 Received shutdown signal, test time was about 10.000000 seconds 00:21:36.967 00:21:36.967 Latency(us) 00:21:36.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.967 =================================================================================================================== 00:21:36.967 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:36.967 [2024-07-15 09:54:53.514991] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1938288 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.phQFM9ffaP 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.phQFM9ffaP 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.phQFM9ffaP 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.phQFM9ffaP' 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1938426 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1938426 /var/tmp/bdevperf.sock 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1938426 ']' 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:36.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.967 09:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.226 [2024-07-15 09:54:53.783128] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:21:37.226 [2024-07-15 09:54:53.783204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1938426 ] 00:21:37.226 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.226 [2024-07-15 09:54:53.813620] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
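The valid_exec_arg / es bookkeeping threaded through the shell trace above is the harness's NOT wrapper at work: this attach is a negative-path case that is expected to fail, so the wrapper runs the command, captures its exit status, and inverts it. A simplified sketch of the idiom follows; the real autotest_common.sh helper additionally validates its argument and honors an EXIT_STATUS override, so treat this as an approximation:

# Simplified sketch of the NOT()/es idiom visible in the trace (an
# approximation, not the exact autotest_common.sh helper).
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es & ~128 ))  # fold "killed by signal N" back to N
    (( es != 0 ))                          # succeed only when the command failed
}
NOT false && echo "failure was expected here"

This is why target/tls.sh@37 can simply return 1 after the RPC error above: the nonzero status is what the surrounding NOT converts into a pass.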
00:21:37.226 [2024-07-15 09:54:53.840930] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.226 [2024-07-15 09:54:53.925708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.487 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.487 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:37.487 09:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.phQFM9ffaP 00:21:37.487 [2024-07-15 09:54:54.250435] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:37.487 [2024-07-15 09:54:54.250545] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:37.487 [2024-07-15 09:54:54.259404] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:37.487 [2024-07-15 09:54:54.259435] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:37.487 [2024-07-15 09:54:54.259504] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:37.487 [2024-07-15 09:54:54.260386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd008d0 (107): Transport endpoint is not connected 00:21:37.487 [2024-07-15 09:54:54.261378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd008d0 (9): Bad file descriptor 00:21:37.487 [2024-07-15 09:54:54.262376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:37.487 [2024-07-15 09:54:54.262396] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:37.487 [2024-07-15 09:54:54.262413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:37.487 request: 00:21:37.487 { 00:21:37.487 "name": "TLSTEST", 00:21:37.487 "trtype": "tcp", 00:21:37.487 "traddr": "10.0.0.2", 00:21:37.487 "adrfam": "ipv4", 00:21:37.487 "trsvcid": "4420", 00:21:37.487 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:37.487 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:37.487 "prchk_reftag": false, 00:21:37.487 "prchk_guard": false, 00:21:37.487 "hdgst": false, 00:21:37.487 "ddgst": false, 00:21:37.487 "psk": "/tmp/tmp.phQFM9ffaP", 00:21:37.487 "method": "bdev_nvme_attach_controller", 00:21:37.487 "req_id": 1 00:21:37.487 } 00:21:37.487 Got JSON-RPC error response 00:21:37.487 response: 00:21:37.487 { 00:21:37.487 "code": -5, 00:21:37.487 "message": "Input/output error" 00:21:37.487 } 00:21:37.745 09:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1938426 00:21:37.746 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1938426 ']' 00:21:37.746 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1938426 00:21:37.746 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:37.746 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:37.746 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1938426 00:21:37.746 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:37.746 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:37.746 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1938426' 00:21:37.746 killing process with pid 1938426 00:21:37.746 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1938426 00:21:37.746 Received shutdown signal, test time was about 10.000000 seconds 00:21:37.746 00:21:37.746 Latency(us) 00:21:37.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.746 =================================================================================================================== 00:21:37.746 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:37.746 [2024-07-15 09:54:54.316285] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:37.746 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1938426 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1938495 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1938495 /var/tmp/bdevperf.sock 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1938495 ']' 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:38.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.006 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.006 [2024-07-15 09:54:54.580122] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:21:38.006 [2024-07-15 09:54:54.580227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1938495 ] 00:21:38.006 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.006 [2024-07-15 09:54:54.613728] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:38.006 [2024-07-15 09:54:54.641766] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.006 [2024-07-15 09:54:54.731812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.264 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:38.264 09:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:38.264 09:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:38.523 [2024-07-15 09:54:55.121799] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:38.523 [2024-07-15 09:54:55.123677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e10de0 (9): Bad file descriptor 00:21:38.523 [2024-07-15 09:54:55.124671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:38.523 [2024-07-15 09:54:55.124691] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:38.523 [2024-07-15 09:54:55.124707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:38.523 request: 00:21:38.523 { 00:21:38.523 "name": "TLSTEST", 00:21:38.523 "trtype": "tcp", 00:21:38.523 "traddr": "10.0.0.2", 00:21:38.523 "adrfam": "ipv4", 00:21:38.523 "trsvcid": "4420", 00:21:38.523 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.523 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:38.523 "prchk_reftag": false, 00:21:38.523 "prchk_guard": false, 00:21:38.523 "hdgst": false, 00:21:38.523 "ddgst": false, 00:21:38.523 "method": "bdev_nvme_attach_controller", 00:21:38.523 "req_id": 1 00:21:38.523 } 00:21:38.523 Got JSON-RPC error response 00:21:38.523 response: 00:21:38.523 { 00:21:38.523 "code": -5, 00:21:38.523 "message": "Input/output error" 00:21:38.523 } 00:21:38.523 09:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1938495 00:21:38.523 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1938495 ']' 00:21:38.524 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1938495 00:21:38.524 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:38.524 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:38.524 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1938495 00:21:38.524 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:38.524 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:38.524 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1938495' 00:21:38.524 killing process with pid 1938495 00:21:38.524 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1938495 00:21:38.524 Received shutdown signal, test time was about 10.000000 seconds 00:21:38.524 00:21:38.524 Latency(us) 00:21:38.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.524 =================================================================================================================== 00:21:38.524 Total : 0.00 0.00 
0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:38.524 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1938495 00:21:38.783 09:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:38.783 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:38.783 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:38.783 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:38.783 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:38.783 09:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1935070 00:21:38.783 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1935070 ']' 00:21:38.783 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1935070 00:21:38.783 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:38.783 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:38.783 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1935070 00:21:38.783 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:38.783 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:38.783 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1935070' 00:21:38.783 killing process with pid 1935070 00:21:38.783 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1935070 00:21:38.783 [2024-07-15 09:54:55.417037] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:38.783 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1935070 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.QridQIgu4A 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.QridQIgu4A 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
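The key_long value built just above is not opaque: format_interchange_psk wraps the configured hex string in the NVMe TLS PSK interchange format, an NVMeTLSkey-1:<hash>: prefix followed by base64 of the key bytes plus a CRC-32 trailer. A minimal standalone sketch of the transform, with two explicit assumptions (the key is treated as raw ASCII bytes, and the CRC-32 is appended little-endian, mirroring the inline python the harness invokes at nvmf/common.sh@705):

# Hedged sketch of the PSK interchange-format transform; the byte order of
# the CRC-32 trailer is an assumption here.
key=00112233445566778899aabbccddeeff0011223344556677
digest=2   # hash identifier in the prefix field (the 02 produced above)
python3 - "$key" "$digest" <<-'EOF'
	import base64, sys, zlib
	key, digest = sys.argv[1].encode(), int(sys.argv[2])
	crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # assumed little-endian
	print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF

If those assumptions hold, this prints exactly the key_long value logged above, which the test then writes to /tmp/tmp.QridQIgu4A and restricts to mode 0600.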
00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1938700 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1938700 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1938700 ']' 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:39.042 09:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.042 [2024-07-15 09:54:55.760811] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:21:39.042 [2024-07-15 09:54:55.760892] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.042 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.042 [2024-07-15 09:54:55.796705] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:39.042 [2024-07-15 09:54:55.823470] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.300 [2024-07-15 09:54:55.907427] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.300 [2024-07-15 09:54:55.907482] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.300 [2024-07-15 09:54:55.907501] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.300 [2024-07-15 09:54:55.907518] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.300 [2024-07-15 09:54:55.907531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:39.300 [2024-07-15 09:54:55.907562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.300 09:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.300 09:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:39.300 09:54:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:39.300 09:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:39.300 09:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.300 09:54:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.300 09:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.QridQIgu4A 00:21:39.300 09:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QridQIgu4A 00:21:39.301 09:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:39.559 [2024-07-15 09:54:56.248784] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.559 09:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:39.817 09:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:40.076 [2024-07-15 09:54:56.730051] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:40.076 [2024-07-15 09:54:56.730340] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.076 09:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:40.334 malloc0 00:21:40.334 09:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:40.592 09:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QridQIgu4A 00:21:40.852 [2024-07-15 09:54:57.472524] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:40.852 09:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QridQIgu4A 00:21:40.852 09:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:40.852 09:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:40.852 09:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:40.852 09:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QridQIgu4A' 00:21:40.852 09:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:40.852 09:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1938876 00:21:40.852 09:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 
10 00:21:40.852 09:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:40.852 09:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1938876 /var/tmp/bdevperf.sock 00:21:40.852 09:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1938876 ']' 00:21:40.852 09:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.852 09:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:40.852 09:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:40.852 09:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:40.852 09:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.852 [2024-07-15 09:54:57.536133] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:21:40.852 [2024-07-15 09:54:57.536219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1938876 ] 00:21:40.852 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.852 [2024-07-15 09:54:57.569495] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:40.852 [2024-07-15 09:54:57.598583] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.111 [2024-07-15 09:54:57.692005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.111 09:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:41.111 09:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:41.111 09:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QridQIgu4A 00:21:41.370 [2024-07-15 09:54:58.045023] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:41.370 [2024-07-15 09:54:58.045138] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:41.370 TLSTESTn1 00:21:41.370 09:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:41.673 Running I/O for 10 seconds... 
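While that 10-second verify workload runs, it is worth condensing the RPC sequence that produced this first successful TLS connection, scattered as it is across the trace above. In the recap below, rpc.py and bdevperf.py abbreviate the full scripts/rpc.py and examples/bdev/bdevperf/bdevperf.py paths shown in the log:

# Target side: TCP transport, subsystem, TLS-enabled listener (-k), a malloc
# namespace, and a host entry carrying the PSK (all as traced above).
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QridQIgu4A
# Initiator side: bdevperf started with -z idles until told what to attach.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QridQIgu4A
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The only difference from the three failing attempts earlier is that here the subsystem NQN, the host NQN, and the PSK presented by the initiator all match what the target was told in nvmf_subsystem_add_host.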
00:21:51.654 00:21:51.654 Latency(us) 00:21:51.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.654 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:51.654 Verification LBA range: start 0x0 length 0x2000 00:21:51.654 TLSTESTn1 : 10.04 3291.48 12.86 0.00 0.00 38796.80 5946.79 71070.15 00:21:51.654 =================================================================================================================== 00:21:51.654 Total : 3291.48 12.86 0.00 0.00 38796.80 5946.79 71070.15 00:21:51.654 0 00:21:51.654 09:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:51.654 09:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1938876 00:21:51.654 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1938876 ']' 00:21:51.654 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1938876 00:21:51.654 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:51.654 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:51.654 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1938876 00:21:51.654 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:51.654 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:51.654 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1938876' 00:21:51.654 killing process with pid 1938876 00:21:51.654 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1938876 00:21:51.654 Received shutdown signal, test time was about 10.000000 seconds 00:21:51.654 00:21:51.654 Latency(us) 00:21:51.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.654 =================================================================================================================== 00:21:51.655 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:51.655 [2024-07-15 09:55:08.340458] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:51.655 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1938876 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.QridQIgu4A 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QridQIgu4A 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QridQIgu4A 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QridQIgu4A 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn 
psk 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QridQIgu4A' 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1940183 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1940183 /var/tmp/bdevperf.sock 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1940183 ']' 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:51.913 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:51.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:51.914 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:51.914 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.914 [2024-07-15 09:55:08.618142] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:21:51.914 [2024-07-15 09:55:08.618219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1940183 ] 00:21:51.914 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.914 [2024-07-15 09:55:08.648686] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
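The chmod 0666 at target/tls.sh@170 earlier in this trace deliberately loosens the key file before this attach: SPDK rejects PSK files that are accessible to group or other, on both ends of the connection, which is what the bdev_nvme_load_psk error below reports (and what tcp_load_psk reports on the target side further down). A hedged approximation of the mode check, enough to reproduce both outcomes:

# Approximate PSK file mode check (assumption: any group/other bit is refused).
psk=/tmp/tmp.QridQIgu4A
if (( 0$(stat -c '%a' "$psk") & 077 )); then
    echo "Incorrect permissions for PSK file" >&2   # what 0666 provokes below
fi
chmod 0600 "$psk"   # the mode the harness restores before the happy-path rerun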
00:21:51.914 [2024-07-15 09:55:08.675599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.172 [2024-07-15 09:55:08.759457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.172 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.172 09:55:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:52.172 09:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QridQIgu4A 00:21:52.430 [2024-07-15 09:55:09.086574] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:52.430 [2024-07-15 09:55:09.086657] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:52.430 [2024-07-15 09:55:09.086678] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.QridQIgu4A 00:21:52.430 request: 00:21:52.430 { 00:21:52.430 "name": "TLSTEST", 00:21:52.430 "trtype": "tcp", 00:21:52.430 "traddr": "10.0.0.2", 00:21:52.430 "adrfam": "ipv4", 00:21:52.430 "trsvcid": "4420", 00:21:52.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:52.430 "prchk_reftag": false, 00:21:52.430 "prchk_guard": false, 00:21:52.430 "hdgst": false, 00:21:52.430 "ddgst": false, 00:21:52.430 "psk": "/tmp/tmp.QridQIgu4A", 00:21:52.430 "method": "bdev_nvme_attach_controller", 00:21:52.430 "req_id": 1 00:21:52.430 } 00:21:52.430 Got JSON-RPC error response 00:21:52.430 response: 00:21:52.430 { 00:21:52.430 "code": -1, 00:21:52.430 "message": "Operation not permitted" 00:21:52.430 } 00:21:52.430 09:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1940183 00:21:52.430 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1940183 ']' 00:21:52.430 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1940183 00:21:52.430 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:52.430 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:52.430 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1940183 00:21:52.430 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:52.430 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:52.430 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1940183' 00:21:52.430 killing process with pid 1940183 00:21:52.430 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1940183 00:21:52.430 Received shutdown signal, test time was about 10.000000 seconds 00:21:52.430 00:21:52.430 Latency(us) 00:21:52.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.430 =================================================================================================================== 00:21:52.430 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:52.430 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1940183 00:21:52.688 09:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:52.688 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:52.688 09:55:09 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:52.688 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:52.688 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:52.688 09:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1938700 00:21:52.688 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1938700 ']' 00:21:52.688 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1938700 00:21:52.688 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:52.688 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:52.688 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1938700 00:21:52.688 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:52.688 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:52.688 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1938700' 00:21:52.688 killing process with pid 1938700 00:21:52.688 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1938700 00:21:52.688 [2024-07-15 09:55:09.352996] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:52.688 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1938700 00:21:52.946 09:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:52.947 09:55:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:52.947 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:52.947 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.947 09:55:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1940330 00:21:52.947 09:55:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:52.947 09:55:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1940330 00:21:52.947 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1940330 ']' 00:21:52.947 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.947 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:52.947 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.947 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:52.947 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.947 [2024-07-15 09:55:09.649010] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:21:52.947 [2024-07-15 09:55:09.649101] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.947 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.947 [2024-07-15 09:55:09.693508] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:52.947 [2024-07-15 09:55:09.725502] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.205 [2024-07-15 09:55:09.814784] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:53.205 [2024-07-15 09:55:09.814850] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.205 [2024-07-15 09:55:09.814867] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.205 [2024-07-15 09:55:09.814889] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.205 [2024-07-15 09:55:09.814903] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:53.205 [2024-07-15 09:55:09.814940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.205 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:53.205 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:53.205 09:55:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:53.205 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:53.205 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.205 09:55:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.205 09:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.QridQIgu4A 00:21:53.205 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:53.205 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.QridQIgu4A 00:21:53.205 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:53.205 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.205 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:53.205 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.205 09:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.QridQIgu4A 00:21:53.206 09:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QridQIgu4A 00:21:53.206 09:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:53.464 [2024-07-15 09:55:10.187978] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.464 09:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:53.722 09:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:53.980 [2024-07-15 09:55:10.689303] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:53.980 [2024-07-15 09:55:10.689546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.980 09:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:54.238 malloc0 00:21:54.238 09:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:54.496 09:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QridQIgu4A 00:21:54.754 [2024-07-15 09:55:11.434234] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:54.754 [2024-07-15 09:55:11.434278] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:54.754 [2024-07-15 09:55:11.434316] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:54.754 request: 00:21:54.754 { 00:21:54.754 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.754 "host": "nqn.2016-06.io.spdk:host1", 00:21:54.754 "psk": "/tmp/tmp.QridQIgu4A", 00:21:54.754 "method": "nvmf_subsystem_add_host", 00:21:54.754 "req_id": 1 00:21:54.754 } 00:21:54.754 Got JSON-RPC error response 00:21:54.754 response: 00:21:54.754 { 00:21:54.754 "code": -32603, 00:21:54.754 "message": "Internal error" 00:21:54.754 } 00:21:54.754 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:54.754 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:54.754 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:54.754 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:54.754 09:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1940330 00:21:54.754 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1940330 ']' 00:21:54.754 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1940330 00:21:54.754 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:54.754 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:54.754 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1940330 00:21:54.754 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:54.754 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:54.754 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1940330' 00:21:54.754 killing process with pid 1940330 00:21:54.754 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1940330 00:21:54.754 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1940330 00:21:55.013 09:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.QridQIgu4A 00:21:55.013 09:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:55.013 09:55:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:21:55.013 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:55.013 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.013 09:55:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1940627 00:21:55.013 09:55:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:55.013 09:55:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1940627 00:21:55.013 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1940627 ']' 00:21:55.013 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.013 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:55.013 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.013 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:55.013 09:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.013 [2024-07-15 09:55:11.779537] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:21:55.013 [2024-07-15 09:55:11.779622] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.271 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.271 [2024-07-15 09:55:11.817236] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:55.271 [2024-07-15 09:55:11.848919] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.271 [2024-07-15 09:55:11.935556] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.271 [2024-07-15 09:55:11.935622] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.271 [2024-07-15 09:55:11.935638] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.271 [2024-07-15 09:55:11.935651] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.271 [2024-07-15 09:55:11.935663] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:55.271 [2024-07-15 09:55:11.935693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.271 09:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:55.271 09:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:55.272 09:55:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:55.272 09:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:55.272 09:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.529 09:55:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.529 09:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.QridQIgu4A 00:21:55.529 09:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QridQIgu4A 00:21:55.529 09:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:55.787 [2024-07-15 09:55:12.351831] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.787 09:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:56.045 09:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:56.303 [2024-07-15 09:55:12.845134] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:56.303 [2024-07-15 09:55:12.845412] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.303 09:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:56.562 malloc0 00:21:56.563 09:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:56.821 09:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QridQIgu4A 00:21:56.821 [2024-07-15 09:55:13.575202] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:56.821 09:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1940787 00:21:56.821 09:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:56.821 09:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:56.821 09:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1940787 /var/tmp/bdevperf.sock 00:21:56.821 09:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1940787 ']' 00:21:56.821 09:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.821 09:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:56.821 09:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:56.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:56.821 09:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:56.821 09:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.079 [2024-07-15 09:55:13.636822] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:21:57.079 [2024-07-15 09:55:13.636904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1940787 ] 00:21:57.079 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.079 [2024-07-15 09:55:13.668375] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:57.079 [2024-07-15 09:55:13.696685] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.079 [2024-07-15 09:55:13.783372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.337 09:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:57.337 09:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:57.337 09:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QridQIgu4A 00:21:57.337 [2024-07-15 09:55:14.112813] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:57.337 [2024-07-15 09:55:14.112985] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:57.593 TLSTESTn1 00:21:57.593 09:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:57.850 09:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:57.850 "subsystems": [ 00:21:57.850 { 00:21:57.850 "subsystem": "keyring", 00:21:57.850 "config": [] 00:21:57.850 }, 00:21:57.850 { 00:21:57.851 "subsystem": "iobuf", 00:21:57.851 "config": [ 00:21:57.851 { 00:21:57.851 "method": "iobuf_set_options", 00:21:57.851 "params": { 00:21:57.851 "small_pool_count": 8192, 00:21:57.851 "large_pool_count": 1024, 00:21:57.851 "small_bufsize": 8192, 00:21:57.851 "large_bufsize": 135168 00:21:57.851 } 00:21:57.851 } 00:21:57.851 ] 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "subsystem": "sock", 00:21:57.851 "config": [ 00:21:57.851 { 00:21:57.851 "method": "sock_set_default_impl", 00:21:57.851 "params": { 00:21:57.851 "impl_name": "posix" 00:21:57.851 } 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "method": "sock_impl_set_options", 00:21:57.851 "params": { 00:21:57.851 "impl_name": "ssl", 00:21:57.851 "recv_buf_size": 4096, 00:21:57.851 "send_buf_size": 4096, 00:21:57.851 "enable_recv_pipe": true, 00:21:57.851 "enable_quickack": false, 00:21:57.851 "enable_placement_id": 0, 00:21:57.851 "enable_zerocopy_send_server": true, 00:21:57.851 "enable_zerocopy_send_client": false, 00:21:57.851 "zerocopy_threshold": 0, 00:21:57.851 "tls_version": 0, 00:21:57.851 "enable_ktls": false 00:21:57.851 
} 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "method": "sock_impl_set_options", 00:21:57.851 "params": { 00:21:57.851 "impl_name": "posix", 00:21:57.851 "recv_buf_size": 2097152, 00:21:57.851 "send_buf_size": 2097152, 00:21:57.851 "enable_recv_pipe": true, 00:21:57.851 "enable_quickack": false, 00:21:57.851 "enable_placement_id": 0, 00:21:57.851 "enable_zerocopy_send_server": true, 00:21:57.851 "enable_zerocopy_send_client": false, 00:21:57.851 "zerocopy_threshold": 0, 00:21:57.851 "tls_version": 0, 00:21:57.851 "enable_ktls": false 00:21:57.851 } 00:21:57.851 } 00:21:57.851 ] 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "subsystem": "vmd", 00:21:57.851 "config": [] 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "subsystem": "accel", 00:21:57.851 "config": [ 00:21:57.851 { 00:21:57.851 "method": "accel_set_options", 00:21:57.851 "params": { 00:21:57.851 "small_cache_size": 128, 00:21:57.851 "large_cache_size": 16, 00:21:57.851 "task_count": 2048, 00:21:57.851 "sequence_count": 2048, 00:21:57.851 "buf_count": 2048 00:21:57.851 } 00:21:57.851 } 00:21:57.851 ] 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "subsystem": "bdev", 00:21:57.851 "config": [ 00:21:57.851 { 00:21:57.851 "method": "bdev_set_options", 00:21:57.851 "params": { 00:21:57.851 "bdev_io_pool_size": 65535, 00:21:57.851 "bdev_io_cache_size": 256, 00:21:57.851 "bdev_auto_examine": true, 00:21:57.851 "iobuf_small_cache_size": 128, 00:21:57.851 "iobuf_large_cache_size": 16 00:21:57.851 } 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "method": "bdev_raid_set_options", 00:21:57.851 "params": { 00:21:57.851 "process_window_size_kb": 1024 00:21:57.851 } 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "method": "bdev_iscsi_set_options", 00:21:57.851 "params": { 00:21:57.851 "timeout_sec": 30 00:21:57.851 } 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "method": "bdev_nvme_set_options", 00:21:57.851 "params": { 00:21:57.851 "action_on_timeout": "none", 00:21:57.851 "timeout_us": 0, 00:21:57.851 "timeout_admin_us": 0, 00:21:57.851 "keep_alive_timeout_ms": 10000, 00:21:57.851 "arbitration_burst": 0, 00:21:57.851 "low_priority_weight": 0, 00:21:57.851 "medium_priority_weight": 0, 00:21:57.851 "high_priority_weight": 0, 00:21:57.851 "nvme_adminq_poll_period_us": 10000, 00:21:57.851 "nvme_ioq_poll_period_us": 0, 00:21:57.851 "io_queue_requests": 0, 00:21:57.851 "delay_cmd_submit": true, 00:21:57.851 "transport_retry_count": 4, 00:21:57.851 "bdev_retry_count": 3, 00:21:57.851 "transport_ack_timeout": 0, 00:21:57.851 "ctrlr_loss_timeout_sec": 0, 00:21:57.851 "reconnect_delay_sec": 0, 00:21:57.851 "fast_io_fail_timeout_sec": 0, 00:21:57.851 "disable_auto_failback": false, 00:21:57.851 "generate_uuids": false, 00:21:57.851 "transport_tos": 0, 00:21:57.851 "nvme_error_stat": false, 00:21:57.851 "rdma_srq_size": 0, 00:21:57.851 "io_path_stat": false, 00:21:57.851 "allow_accel_sequence": false, 00:21:57.851 "rdma_max_cq_size": 0, 00:21:57.851 "rdma_cm_event_timeout_ms": 0, 00:21:57.851 "dhchap_digests": [ 00:21:57.851 "sha256", 00:21:57.851 "sha384", 00:21:57.851 "sha512" 00:21:57.851 ], 00:21:57.851 "dhchap_dhgroups": [ 00:21:57.851 "null", 00:21:57.851 "ffdhe2048", 00:21:57.851 "ffdhe3072", 00:21:57.851 "ffdhe4096", 00:21:57.851 "ffdhe6144", 00:21:57.851 "ffdhe8192" 00:21:57.851 ] 00:21:57.851 } 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "method": "bdev_nvme_set_hotplug", 00:21:57.851 "params": { 00:21:57.851 "period_us": 100000, 00:21:57.851 "enable": false 00:21:57.851 } 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "method": "bdev_malloc_create", 
00:21:57.851 "params": { 00:21:57.851 "name": "malloc0", 00:21:57.851 "num_blocks": 8192, 00:21:57.851 "block_size": 4096, 00:21:57.851 "physical_block_size": 4096, 00:21:57.851 "uuid": "66177462-bbee-4d83-98ac-84afa1a51451", 00:21:57.851 "optimal_io_boundary": 0 00:21:57.851 } 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "method": "bdev_wait_for_examine" 00:21:57.851 } 00:21:57.851 ] 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "subsystem": "nbd", 00:21:57.851 "config": [] 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "subsystem": "scheduler", 00:21:57.851 "config": [ 00:21:57.852 { 00:21:57.852 "method": "framework_set_scheduler", 00:21:57.852 "params": { 00:21:57.852 "name": "static" 00:21:57.852 } 00:21:57.852 } 00:21:57.852 ] 00:21:57.852 }, 00:21:57.852 { 00:21:57.852 "subsystem": "nvmf", 00:21:57.852 "config": [ 00:21:57.852 { 00:21:57.852 "method": "nvmf_set_config", 00:21:57.852 "params": { 00:21:57.852 "discovery_filter": "match_any", 00:21:57.852 "admin_cmd_passthru": { 00:21:57.852 "identify_ctrlr": false 00:21:57.852 } 00:21:57.852 } 00:21:57.852 }, 00:21:57.852 { 00:21:57.852 "method": "nvmf_set_max_subsystems", 00:21:57.852 "params": { 00:21:57.852 "max_subsystems": 1024 00:21:57.852 } 00:21:57.852 }, 00:21:57.852 { 00:21:57.852 "method": "nvmf_set_crdt", 00:21:57.852 "params": { 00:21:57.852 "crdt1": 0, 00:21:57.852 "crdt2": 0, 00:21:57.852 "crdt3": 0 00:21:57.852 } 00:21:57.852 }, 00:21:57.852 { 00:21:57.852 "method": "nvmf_create_transport", 00:21:57.852 "params": { 00:21:57.852 "trtype": "TCP", 00:21:57.852 "max_queue_depth": 128, 00:21:57.852 "max_io_qpairs_per_ctrlr": 127, 00:21:57.852 "in_capsule_data_size": 4096, 00:21:57.852 "max_io_size": 131072, 00:21:57.852 "io_unit_size": 131072, 00:21:57.852 "max_aq_depth": 128, 00:21:57.852 "num_shared_buffers": 511, 00:21:57.852 "buf_cache_size": 4294967295, 00:21:57.852 "dif_insert_or_strip": false, 00:21:57.852 "zcopy": false, 00:21:57.852 "c2h_success": false, 00:21:57.852 "sock_priority": 0, 00:21:57.852 "abort_timeout_sec": 1, 00:21:57.852 "ack_timeout": 0, 00:21:57.852 "data_wr_pool_size": 0 00:21:57.852 } 00:21:57.852 }, 00:21:57.852 { 00:21:57.852 "method": "nvmf_create_subsystem", 00:21:57.852 "params": { 00:21:57.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.852 "allow_any_host": false, 00:21:57.852 "serial_number": "SPDK00000000000001", 00:21:57.852 "model_number": "SPDK bdev Controller", 00:21:57.852 "max_namespaces": 10, 00:21:57.852 "min_cntlid": 1, 00:21:57.852 "max_cntlid": 65519, 00:21:57.852 "ana_reporting": false 00:21:57.852 } 00:21:57.852 }, 00:21:57.852 { 00:21:57.852 "method": "nvmf_subsystem_add_host", 00:21:57.852 "params": { 00:21:57.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.852 "host": "nqn.2016-06.io.spdk:host1", 00:21:57.852 "psk": "/tmp/tmp.QridQIgu4A" 00:21:57.852 } 00:21:57.852 }, 00:21:57.852 { 00:21:57.852 "method": "nvmf_subsystem_add_ns", 00:21:57.852 "params": { 00:21:57.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.852 "namespace": { 00:21:57.852 "nsid": 1, 00:21:57.852 "bdev_name": "malloc0", 00:21:57.852 "nguid": "66177462BBEE4D8398AC84AFA1A51451", 00:21:57.852 "uuid": "66177462-bbee-4d83-98ac-84afa1a51451", 00:21:57.852 "no_auto_visible": false 00:21:57.852 } 00:21:57.852 } 00:21:57.852 }, 00:21:57.852 { 00:21:57.852 "method": "nvmf_subsystem_add_listener", 00:21:57.852 "params": { 00:21:57.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.852 "listen_address": { 00:21:57.852 "trtype": "TCP", 00:21:57.852 "adrfam": "IPv4", 00:21:57.852 "traddr": "10.0.0.2", 00:21:57.852 
"trsvcid": "4420" 00:21:57.852 }, 00:21:57.852 "secure_channel": true 00:21:57.852 } 00:21:57.852 } 00:21:57.852 ] 00:21:57.852 } 00:21:57.852 ] 00:21:57.852 }' 00:21:57.852 09:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:58.111 09:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:58.111 "subsystems": [ 00:21:58.111 { 00:21:58.111 "subsystem": "keyring", 00:21:58.111 "config": [] 00:21:58.111 }, 00:21:58.111 { 00:21:58.111 "subsystem": "iobuf", 00:21:58.111 "config": [ 00:21:58.111 { 00:21:58.111 "method": "iobuf_set_options", 00:21:58.111 "params": { 00:21:58.111 "small_pool_count": 8192, 00:21:58.111 "large_pool_count": 1024, 00:21:58.111 "small_bufsize": 8192, 00:21:58.111 "large_bufsize": 135168 00:21:58.111 } 00:21:58.111 } 00:21:58.111 ] 00:21:58.111 }, 00:21:58.111 { 00:21:58.111 "subsystem": "sock", 00:21:58.111 "config": [ 00:21:58.111 { 00:21:58.111 "method": "sock_set_default_impl", 00:21:58.111 "params": { 00:21:58.111 "impl_name": "posix" 00:21:58.111 } 00:21:58.111 }, 00:21:58.111 { 00:21:58.111 "method": "sock_impl_set_options", 00:21:58.111 "params": { 00:21:58.111 "impl_name": "ssl", 00:21:58.111 "recv_buf_size": 4096, 00:21:58.111 "send_buf_size": 4096, 00:21:58.111 "enable_recv_pipe": true, 00:21:58.111 "enable_quickack": false, 00:21:58.111 "enable_placement_id": 0, 00:21:58.111 "enable_zerocopy_send_server": true, 00:21:58.111 "enable_zerocopy_send_client": false, 00:21:58.111 "zerocopy_threshold": 0, 00:21:58.111 "tls_version": 0, 00:21:58.111 "enable_ktls": false 00:21:58.111 } 00:21:58.111 }, 00:21:58.111 { 00:21:58.111 "method": "sock_impl_set_options", 00:21:58.111 "params": { 00:21:58.111 "impl_name": "posix", 00:21:58.111 "recv_buf_size": 2097152, 00:21:58.111 "send_buf_size": 2097152, 00:21:58.111 "enable_recv_pipe": true, 00:21:58.111 "enable_quickack": false, 00:21:58.111 "enable_placement_id": 0, 00:21:58.111 "enable_zerocopy_send_server": true, 00:21:58.111 "enable_zerocopy_send_client": false, 00:21:58.111 "zerocopy_threshold": 0, 00:21:58.111 "tls_version": 0, 00:21:58.111 "enable_ktls": false 00:21:58.111 } 00:21:58.111 } 00:21:58.111 ] 00:21:58.111 }, 00:21:58.111 { 00:21:58.111 "subsystem": "vmd", 00:21:58.111 "config": [] 00:21:58.111 }, 00:21:58.111 { 00:21:58.111 "subsystem": "accel", 00:21:58.111 "config": [ 00:21:58.111 { 00:21:58.111 "method": "accel_set_options", 00:21:58.111 "params": { 00:21:58.111 "small_cache_size": 128, 00:21:58.111 "large_cache_size": 16, 00:21:58.111 "task_count": 2048, 00:21:58.111 "sequence_count": 2048, 00:21:58.111 "buf_count": 2048 00:21:58.111 } 00:21:58.111 } 00:21:58.111 ] 00:21:58.111 }, 00:21:58.111 { 00:21:58.111 "subsystem": "bdev", 00:21:58.111 "config": [ 00:21:58.111 { 00:21:58.111 "method": "bdev_set_options", 00:21:58.111 "params": { 00:21:58.111 "bdev_io_pool_size": 65535, 00:21:58.111 "bdev_io_cache_size": 256, 00:21:58.111 "bdev_auto_examine": true, 00:21:58.111 "iobuf_small_cache_size": 128, 00:21:58.111 "iobuf_large_cache_size": 16 00:21:58.112 } 00:21:58.112 }, 00:21:58.112 { 00:21:58.112 "method": "bdev_raid_set_options", 00:21:58.112 "params": { 00:21:58.112 "process_window_size_kb": 1024 00:21:58.112 } 00:21:58.112 }, 00:21:58.112 { 00:21:58.112 "method": "bdev_iscsi_set_options", 00:21:58.112 "params": { 00:21:58.112 "timeout_sec": 30 00:21:58.112 } 00:21:58.112 }, 00:21:58.112 { 00:21:58.112 "method": "bdev_nvme_set_options", 00:21:58.112 "params": { 
00:21:58.112 "action_on_timeout": "none", 00:21:58.112 "timeout_us": 0, 00:21:58.112 "timeout_admin_us": 0, 00:21:58.112 "keep_alive_timeout_ms": 10000, 00:21:58.112 "arbitration_burst": 0, 00:21:58.112 "low_priority_weight": 0, 00:21:58.112 "medium_priority_weight": 0, 00:21:58.112 "high_priority_weight": 0, 00:21:58.112 "nvme_adminq_poll_period_us": 10000, 00:21:58.112 "nvme_ioq_poll_period_us": 0, 00:21:58.112 "io_queue_requests": 512, 00:21:58.112 "delay_cmd_submit": true, 00:21:58.112 "transport_retry_count": 4, 00:21:58.112 "bdev_retry_count": 3, 00:21:58.112 "transport_ack_timeout": 0, 00:21:58.112 "ctrlr_loss_timeout_sec": 0, 00:21:58.112 "reconnect_delay_sec": 0, 00:21:58.112 "fast_io_fail_timeout_sec": 0, 00:21:58.112 "disable_auto_failback": false, 00:21:58.112 "generate_uuids": false, 00:21:58.112 "transport_tos": 0, 00:21:58.112 "nvme_error_stat": false, 00:21:58.112 "rdma_srq_size": 0, 00:21:58.112 "io_path_stat": false, 00:21:58.112 "allow_accel_sequence": false, 00:21:58.112 "rdma_max_cq_size": 0, 00:21:58.112 "rdma_cm_event_timeout_ms": 0, 00:21:58.112 "dhchap_digests": [ 00:21:58.112 "sha256", 00:21:58.112 "sha384", 00:21:58.112 "sha512" 00:21:58.112 ], 00:21:58.112 "dhchap_dhgroups": [ 00:21:58.112 "null", 00:21:58.112 "ffdhe2048", 00:21:58.112 "ffdhe3072", 00:21:58.112 "ffdhe4096", 00:21:58.112 "ffdhe6144", 00:21:58.112 "ffdhe8192" 00:21:58.112 ] 00:21:58.112 } 00:21:58.112 }, 00:21:58.112 { 00:21:58.112 "method": "bdev_nvme_attach_controller", 00:21:58.112 "params": { 00:21:58.112 "name": "TLSTEST", 00:21:58.112 "trtype": "TCP", 00:21:58.112 "adrfam": "IPv4", 00:21:58.112 "traddr": "10.0.0.2", 00:21:58.112 "trsvcid": "4420", 00:21:58.112 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.112 "prchk_reftag": false, 00:21:58.112 "prchk_guard": false, 00:21:58.112 "ctrlr_loss_timeout_sec": 0, 00:21:58.112 "reconnect_delay_sec": 0, 00:21:58.112 "fast_io_fail_timeout_sec": 0, 00:21:58.112 "psk": "/tmp/tmp.QridQIgu4A", 00:21:58.112 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:58.112 "hdgst": false, 00:21:58.112 "ddgst": false 00:21:58.112 } 00:21:58.112 }, 00:21:58.112 { 00:21:58.112 "method": "bdev_nvme_set_hotplug", 00:21:58.112 "params": { 00:21:58.112 "period_us": 100000, 00:21:58.112 "enable": false 00:21:58.112 } 00:21:58.112 }, 00:21:58.112 { 00:21:58.112 "method": "bdev_wait_for_examine" 00:21:58.112 } 00:21:58.112 ] 00:21:58.112 }, 00:21:58.112 { 00:21:58.112 "subsystem": "nbd", 00:21:58.112 "config": [] 00:21:58.112 } 00:21:58.112 ] 00:21:58.112 }' 00:21:58.112 09:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1940787 00:21:58.112 09:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1940787 ']' 00:21:58.112 09:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1940787 00:21:58.112 09:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:58.112 09:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:58.112 09:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1940787 00:21:58.112 09:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:58.112 09:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:58.112 09:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1940787' 00:21:58.112 killing process with pid 1940787 00:21:58.112 09:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1940787 
00:21:58.112 Received shutdown signal, test time was about 10.000000 seconds 00:21:58.112 00:21:58.112 Latency(us) 00:21:58.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.112 =================================================================================================================== 00:21:58.112 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:58.112 [2024-07-15 09:55:14.842984] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:58.112 09:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1940787 00:21:58.373 09:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1940627 00:21:58.373 09:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1940627 ']' 00:21:58.373 09:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1940627 00:21:58.373 09:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:58.373 09:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:58.373 09:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1940627 00:21:58.373 09:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:58.373 09:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:58.373 09:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1940627' 00:21:58.373 killing process with pid 1940627 00:21:58.373 09:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1940627 00:21:58.373 [2024-07-15 09:55:15.086320] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:58.373 09:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1940627 00:21:58.632 09:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:58.632 09:55:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:58.632 09:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:58.632 09:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:58.632 "subsystems": [ 00:21:58.632 { 00:21:58.632 "subsystem": "keyring", 00:21:58.632 "config": [] 00:21:58.632 }, 00:21:58.632 { 00:21:58.632 "subsystem": "iobuf", 00:21:58.632 "config": [ 00:21:58.632 { 00:21:58.632 "method": "iobuf_set_options", 00:21:58.632 "params": { 00:21:58.632 "small_pool_count": 8192, 00:21:58.632 "large_pool_count": 1024, 00:21:58.633 "small_bufsize": 8192, 00:21:58.633 "large_bufsize": 135168 00:21:58.633 } 00:21:58.633 } 00:21:58.633 ] 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "subsystem": "sock", 00:21:58.633 "config": [ 00:21:58.633 { 00:21:58.633 "method": "sock_set_default_impl", 00:21:58.633 "params": { 00:21:58.633 "impl_name": "posix" 00:21:58.633 } 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "method": "sock_impl_set_options", 00:21:58.633 "params": { 00:21:58.633 "impl_name": "ssl", 00:21:58.633 "recv_buf_size": 4096, 00:21:58.633 "send_buf_size": 4096, 00:21:58.633 "enable_recv_pipe": true, 00:21:58.633 "enable_quickack": false, 00:21:58.633 "enable_placement_id": 0, 00:21:58.633 "enable_zerocopy_send_server": true, 00:21:58.633 "enable_zerocopy_send_client": false, 00:21:58.633 "zerocopy_threshold": 0, 00:21:58.633 "tls_version": 0, 00:21:58.633 
"enable_ktls": false 00:21:58.633 } 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "method": "sock_impl_set_options", 00:21:58.633 "params": { 00:21:58.633 "impl_name": "posix", 00:21:58.633 "recv_buf_size": 2097152, 00:21:58.633 "send_buf_size": 2097152, 00:21:58.633 "enable_recv_pipe": true, 00:21:58.633 "enable_quickack": false, 00:21:58.633 "enable_placement_id": 0, 00:21:58.633 "enable_zerocopy_send_server": true, 00:21:58.633 "enable_zerocopy_send_client": false, 00:21:58.633 "zerocopy_threshold": 0, 00:21:58.633 "tls_version": 0, 00:21:58.633 "enable_ktls": false 00:21:58.633 } 00:21:58.633 } 00:21:58.633 ] 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "subsystem": "vmd", 00:21:58.633 "config": [] 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "subsystem": "accel", 00:21:58.633 "config": [ 00:21:58.633 { 00:21:58.633 "method": "accel_set_options", 00:21:58.633 "params": { 00:21:58.633 "small_cache_size": 128, 00:21:58.633 "large_cache_size": 16, 00:21:58.633 "task_count": 2048, 00:21:58.633 "sequence_count": 2048, 00:21:58.633 "buf_count": 2048 00:21:58.633 } 00:21:58.633 } 00:21:58.633 ] 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "subsystem": "bdev", 00:21:58.633 "config": [ 00:21:58.633 { 00:21:58.633 "method": "bdev_set_options", 00:21:58.633 "params": { 00:21:58.633 "bdev_io_pool_size": 65535, 00:21:58.633 "bdev_io_cache_size": 256, 00:21:58.633 "bdev_auto_examine": true, 00:21:58.633 "iobuf_small_cache_size": 128, 00:21:58.633 "iobuf_large_cache_size": 16 00:21:58.633 } 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "method": "bdev_raid_set_options", 00:21:58.633 "params": { 00:21:58.633 "process_window_size_kb": 1024 00:21:58.633 } 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "method": "bdev_iscsi_set_options", 00:21:58.633 "params": { 00:21:58.633 "timeout_sec": 30 00:21:58.633 } 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "method": "bdev_nvme_set_options", 00:21:58.633 "params": { 00:21:58.633 "action_on_timeout": "none", 00:21:58.633 "timeout_us": 0, 00:21:58.633 "timeout_admin_us": 0, 00:21:58.633 "keep_alive_timeout_ms": 10000, 00:21:58.633 "arbitration_burst": 0, 00:21:58.633 "low_priority_weight": 0, 00:21:58.633 "medium_priority_weight": 0, 00:21:58.633 "high_priority_weight": 0, 00:21:58.633 "nvme_adminq_poll_period_us": 10000, 00:21:58.633 "nvme_ioq_poll_period_us": 0, 00:21:58.633 "io_queue_requests": 0, 00:21:58.633 "delay_cmd_submit": true, 00:21:58.633 "transport_retry_count": 4, 00:21:58.633 "bdev_retry_count": 3, 00:21:58.633 "transport_ack_timeout": 0, 00:21:58.633 "ctrlr_loss_timeout_sec": 0, 00:21:58.633 "reconnect_delay_sec": 0, 00:21:58.633 "fast_io_fail_timeout_sec": 0, 00:21:58.633 "disable_auto_failback": false, 00:21:58.633 "generate_uuids": false, 00:21:58.633 "transport_tos": 0, 00:21:58.633 "nvme_error_stat": false, 00:21:58.633 "rdma_srq_size": 0, 00:21:58.633 "io_path_stat": false, 00:21:58.633 "allow_accel_sequence": false, 00:21:58.633 "rdma_max_cq_size": 0, 00:21:58.633 "rdma_cm_event_timeout_ms": 0, 00:21:58.633 "dhchap_digests": [ 00:21:58.633 "sha256", 00:21:58.633 "sha384", 00:21:58.633 "sha512" 00:21:58.633 ], 00:21:58.633 "dhchap_dhgroups": [ 00:21:58.633 "null", 00:21:58.633 "ffdhe2048", 00:21:58.633 "ffdhe3072", 00:21:58.633 "ffdhe4096", 00:21:58.633 "ffdhe6144", 00:21:58.633 "ffdhe8192" 00:21:58.633 ] 00:21:58.633 } 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "method": "bdev_nvme_set_hotplug", 00:21:58.633 "params": { 00:21:58.633 "period_us": 100000, 00:21:58.633 "enable": false 00:21:58.633 } 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 
"method": "bdev_malloc_create", 00:21:58.633 "params": { 00:21:58.633 "name": "malloc0", 00:21:58.633 "num_blocks": 8192, 00:21:58.633 "block_size": 4096, 00:21:58.633 "physical_block_size": 4096, 00:21:58.633 "uuid": "66177462-bbee-4d83-98ac-84afa1a51451", 00:21:58.633 "optimal_io_boundary": 0 00:21:58.633 } 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "method": "bdev_wait_for_examine" 00:21:58.633 } 00:21:58.633 ] 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "subsystem": "nbd", 00:21:58.633 "config": [] 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "subsystem": "scheduler", 00:21:58.633 "config": [ 00:21:58.633 { 00:21:58.633 "method": "framework_set_scheduler", 00:21:58.633 "params": { 00:21:58.633 "name": "static" 00:21:58.633 } 00:21:58.633 } 00:21:58.633 ] 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "subsystem": "nvmf", 00:21:58.633 "config": [ 00:21:58.633 { 00:21:58.633 "method": "nvmf_set_config", 00:21:58.633 "params": { 00:21:58.633 "discovery_filter": "match_any", 00:21:58.633 "admin_cmd_passthru": { 00:21:58.633 "identify_ctrlr": false 00:21:58.633 } 00:21:58.633 } 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "method": "nvmf_set_max_subsystems", 00:21:58.633 "params": { 00:21:58.633 "max_subsystems": 1024 00:21:58.633 } 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "method": "nvmf_set_crdt", 00:21:58.633 "params": { 00:21:58.633 "crdt1": 0, 00:21:58.633 "crdt2": 0, 00:21:58.633 "crdt3": 0 00:21:58.633 } 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "method": "nvmf_create_transport", 00:21:58.633 "params": { 00:21:58.633 "trtype": "TCP", 00:21:58.633 "max_queue_depth": 128, 00:21:58.633 "max_io_qpairs_per_ctrlr": 127, 00:21:58.633 "in_capsule_data_size": 4096, 00:21:58.633 "max_io_size": 131072, 00:21:58.633 "io_unit_size": 131072, 00:21:58.633 "max_aq_depth": 128, 00:21:58.633 "num_shared_buffers": 511, 00:21:58.633 "buf_cache_size": 4294967295, 00:21:58.633 "dif_insert_or_strip": false, 00:21:58.633 "zcopy": false, 00:21:58.633 "c2h_success": false, 00:21:58.633 "sock_priority": 0, 00:21:58.633 "abort_timeout_sec": 1, 00:21:58.633 "ack_timeout": 0, 00:21:58.633 "data_wr_pool_size": 0 00:21:58.633 } 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "method": "nvmf_create_subsystem", 00:21:58.633 "params": { 00:21:58.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.633 "allow_any_host": false, 00:21:58.633 "serial_number": "SPDK00000000000001", 00:21:58.633 "model_number": "SPDK bdev Controller", 00:21:58.633 "max_namespaces": 10, 00:21:58.633 "min_cntlid": 1, 00:21:58.633 "max_cntlid": 65519, 00:21:58.633 "ana_reporting": false 00:21:58.633 } 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "method": "nvmf_subsystem_add_host", 00:21:58.633 "params": { 00:21:58.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.633 "host": "nqn.2016-06.io.spdk:host1", 00:21:58.633 "psk": "/tmp/tmp.QridQIgu4A" 00:21:58.633 } 00:21:58.633 }, 00:21:58.633 { 00:21:58.633 "method": "nvmf_subsystem_add_ns", 00:21:58.633 "params": { 00:21:58.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.633 "namespace": { 00:21:58.633 "nsid": 1, 00:21:58.633 "bdev_name": "malloc0", 00:21:58.633 "nguid": "66177462BBEE4D8398AC84AFA1A51451", 00:21:58.633 "uuid": "66177462-bbee-4d83-98ac-84afa1a51451", 00:21:58.634 "no_auto_visible": false 00:21:58.634 } 00:21:58.634 } 00:21:58.634 }, 00:21:58.634 { 00:21:58.634 "method": "nvmf_subsystem_add_listener", 00:21:58.634 "params": { 00:21:58.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.634 "listen_address": { 00:21:58.634 "trtype": "TCP", 00:21:58.634 "adrfam": "IPv4", 00:21:58.634 
"traddr": "10.0.0.2", 00:21:58.634 "trsvcid": "4420" 00:21:58.634 }, 00:21:58.634 "secure_channel": true 00:21:58.634 } 00:21:58.634 } 00:21:58.634 ] 00:21:58.634 } 00:21:58.634 ] 00:21:58.634 }' 00:21:58.634 09:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.634 09:55:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1941064 00:21:58.634 09:55:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:58.634 09:55:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1941064 00:21:58.634 09:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1941064 ']' 00:21:58.634 09:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.634 09:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.634 09:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.634 09:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.634 09:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.634 [2024-07-15 09:55:15.397241] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:21:58.634 [2024-07-15 09:55:15.397327] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.893 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.893 [2024-07-15 09:55:15.435072] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:58.893 [2024-07-15 09:55:15.467078] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.893 [2024-07-15 09:55:15.553558] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.893 [2024-07-15 09:55:15.553623] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.893 [2024-07-15 09:55:15.553639] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.893 [2024-07-15 09:55:15.553652] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.893 [2024-07-15 09:55:15.553665] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:58.893 [2024-07-15 09:55:15.553753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.152 [2024-07-15 09:55:15.790640] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.152 [2024-07-15 09:55:15.806598] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:59.152 [2024-07-15 09:55:15.822644] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:59.152 [2024-07-15 09:55:15.832117] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.718 09:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:59.718 09:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:59.718 09:55:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:59.718 09:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:59.718 09:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.718 09:55:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.718 09:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1941215 00:21:59.718 09:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1941215 /var/tmp/bdevperf.sock 00:21:59.718 09:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1941215 ']' 00:21:59.718 09:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:59.718 09:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:59.718 09:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.718 09:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:59.718 "subsystems": [ 00:21:59.718 { 00:21:59.718 "subsystem": "keyring", 00:21:59.718 "config": [] 00:21:59.718 }, 00:21:59.718 { 00:21:59.718 "subsystem": "iobuf", 00:21:59.718 "config": [ 00:21:59.718 { 00:21:59.718 "method": "iobuf_set_options", 00:21:59.718 "params": { 00:21:59.718 "small_pool_count": 8192, 00:21:59.718 "large_pool_count": 1024, 00:21:59.718 "small_bufsize": 8192, 00:21:59.718 "large_bufsize": 135168 00:21:59.719 } 00:21:59.719 } 00:21:59.719 ] 00:21:59.719 }, 00:21:59.719 { 00:21:59.719 "subsystem": "sock", 00:21:59.719 "config": [ 00:21:59.719 { 00:21:59.719 "method": "sock_set_default_impl", 00:21:59.719 "params": { 00:21:59.719 "impl_name": "posix" 00:21:59.719 } 00:21:59.719 }, 00:21:59.719 { 00:21:59.719 "method": "sock_impl_set_options", 00:21:59.719 "params": { 00:21:59.719 "impl_name": "ssl", 00:21:59.719 "recv_buf_size": 4096, 00:21:59.719 "send_buf_size": 4096, 00:21:59.719 "enable_recv_pipe": true, 00:21:59.719 "enable_quickack": false, 00:21:59.719 "enable_placement_id": 0, 00:21:59.719 "enable_zerocopy_send_server": true, 00:21:59.719 "enable_zerocopy_send_client": false, 00:21:59.719 "zerocopy_threshold": 0, 00:21:59.719 "tls_version": 0, 00:21:59.719 "enable_ktls": false 00:21:59.719 } 00:21:59.719 }, 00:21:59.719 { 00:21:59.719 "method": "sock_impl_set_options", 00:21:59.719 "params": { 00:21:59.719 "impl_name": "posix", 00:21:59.719 "recv_buf_size": 2097152, 00:21:59.719 "send_buf_size": 2097152, 00:21:59.719 "enable_recv_pipe": true, 00:21:59.719 
"enable_quickack": false, 00:21:59.719 "enable_placement_id": 0, 00:21:59.719 "enable_zerocopy_send_server": true, 00:21:59.719 "enable_zerocopy_send_client": false, 00:21:59.719 "zerocopy_threshold": 0, 00:21:59.719 "tls_version": 0, 00:21:59.719 "enable_ktls": false 00:21:59.719 } 00:21:59.719 } 00:21:59.719 ] 00:21:59.719 }, 00:21:59.719 { 00:21:59.719 "subsystem": "vmd", 00:21:59.719 "config": [] 00:21:59.719 }, 00:21:59.719 { 00:21:59.719 "subsystem": "accel", 00:21:59.719 "config": [ 00:21:59.719 { 00:21:59.719 "method": "accel_set_options", 00:21:59.719 "params": { 00:21:59.719 "small_cache_size": 128, 00:21:59.719 "large_cache_size": 16, 00:21:59.719 "task_count": 2048, 00:21:59.719 "sequence_count": 2048, 00:21:59.719 "buf_count": 2048 00:21:59.719 } 00:21:59.719 } 00:21:59.719 ] 00:21:59.719 }, 00:21:59.719 { 00:21:59.719 "subsystem": "bdev", 00:21:59.719 "config": [ 00:21:59.719 { 00:21:59.719 "method": "bdev_set_options", 00:21:59.719 "params": { 00:21:59.719 "bdev_io_pool_size": 65535, 00:21:59.719 "bdev_io_cache_size": 256, 00:21:59.719 "bdev_auto_examine": true, 00:21:59.719 "iobuf_small_cache_size": 128, 00:21:59.719 "iobuf_large_cache_size": 16 00:21:59.719 } 00:21:59.719 }, 00:21:59.719 { 00:21:59.719 "method": "bdev_raid_set_options", 00:21:59.719 "params": { 00:21:59.719 "process_window_size_kb": 1024 00:21:59.719 } 00:21:59.719 }, 00:21:59.719 { 00:21:59.719 "method": "bdev_iscsi_set_options", 00:21:59.719 "params": { 00:21:59.719 "timeout_sec": 30 00:21:59.719 } 00:21:59.719 }, 00:21:59.719 { 00:21:59.719 "method": "bdev_nvme_set_options", 00:21:59.719 "params": { 00:21:59.719 "action_on_timeout": "none", 00:21:59.719 "timeout_us": 0, 00:21:59.719 "timeout_admin_us": 0, 00:21:59.719 "keep_alive_timeout_ms": 10000, 00:21:59.719 "arbitration_burst": 0, 00:21:59.719 "low_priority_weight": 0, 00:21:59.719 "medium_priority_weight": 0, 00:21:59.719 "high_priority_weight": 0, 00:21:59.719 "nvme_adminq_poll_period_us": 10000, 00:21:59.719 "nvme_ioq_poll_period_us": 0, 00:21:59.719 "io_queue_requests": 512, 00:21:59.719 "delay_cmd_submit": true, 00:21:59.719 "transport_retry_count": 4, 00:21:59.719 "bdev_retry_count": 3, 00:21:59.719 "transport_ack_timeout": 0, 00:21:59.719 "ctrlr_loss_timeout_sec": 0, 00:21:59.719 "reconnect_delay_sec": 0, 00:21:59.719 "fast_io_fail_timeout_sec": 0, 00:21:59.719 "disable_auto_failback": false, 00:21:59.719 "generate_uuids": false, 00:21:59.719 "transport_tos": 0, 00:21:59.719 "nvme_error_stat": false, 00:21:59.719 "rdma_srq_size": 0, 00:21:59.719 "io_path_stat": false, 00:21:59.719 "allow_accel_sequence": false, 00:21:59.719 "rdma_max_cq_size": 0, 00:21:59.719 "rdma_cm_event_timeout_ms": 0, 00:21:59.719 "dhchap_digests": [ 00:21:59.719 "sha256", 00:21:59.719 "sha384", 00:21:59.719 "sha512" 00:21:59.719 ], 00:21:59.719 "dhchap_dhgroups": [ 00:21:59.719 "null", 00:21:59.719 "ffdhe2048", 00:21:59.719 "ffdhe3072", 00:21:59.719 "ffdhe4096", 00:21:59.719 "ffdhe6144", 00:21:59.719 "ffdhe8192" 00:21:59.719 ] 00:21:59.719 } 00:21:59.719 }, 00:21:59.719 { 00:21:59.719 "method": "bdev_nvme_attach_controller", 00:21:59.719 "params": { 00:21:59.719 "name": "TLSTEST", 00:21:59.719 "trtype": "TCP", 00:21:59.719 "adrfam": "IPv4", 00:21:59.719 "traddr": "10.0.0.2", 00:21:59.719 "trsvcid": "4420", 00:21:59.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.719 "prchk_reftag": false, 00:21:59.719 "prchk_guard": false, 00:21:59.719 "ctrlr_loss_timeout_sec": 0, 00:21:59.719 "reconnect_delay_sec": 0, 00:21:59.719 "fast_io_fail_timeout_sec": 0, 00:21:59.719 
"psk": "/tmp/tmp.QridQIgu4A", 00:21:59.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:59.719 "hdgst": false, 00:21:59.719 "ddgst": false 00:21:59.719 } 00:21:59.719 }, 00:21:59.719 { 00:21:59.719 "method": "bdev_nvme_set_hotplug", 00:21:59.719 "params": { 00:21:59.719 "period_us": 100000, 00:21:59.719 "enable": false 00:21:59.719 } 00:21:59.719 }, 00:21:59.719 { 00:21:59.719 "method": "bdev_wait_for_examine" 00:21:59.720 } 00:21:59.720 ] 00:21:59.720 }, 00:21:59.720 { 00:21:59.720 "subsystem": "nbd", 00:21:59.720 "config": [] 00:21:59.720 } 00:21:59.720 ] 00:21:59.720 }' 00:21:59.720 09:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:59.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:59.720 09:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.720 09:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.720 [2024-07-15 09:55:16.400470] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:21:59.720 [2024-07-15 09:55:16.400547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1941215 ] 00:21:59.720 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.720 [2024-07-15 09:55:16.431328] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:59.720 [2024-07-15 09:55:16.458858] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.979 [2024-07-15 09:55:16.545910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.979 [2024-07-15 09:55:16.714223] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:59.979 [2024-07-15 09:55:16.714382] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:00.913 09:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.913 09:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:00.913 09:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:00.913 Running I/O for 10 seconds... 
00:22:10.958 00:22:10.958 Latency(us) 00:22:10.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.958 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:10.958 Verification LBA range: start 0x0 length 0x2000 00:22:10.958 TLSTESTn1 : 10.06 2857.80 11.16 0.00 0.00 44661.18 8641.04 56312.41 00:22:10.958 =================================================================================================================== 00:22:10.958 Total : 2857.80 11.16 0.00 0.00 44661.18 8641.04 56312.41 00:22:10.958 0 00:22:10.958 09:55:27 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:10.958 09:55:27 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1941215 00:22:10.958 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1941215 ']' 00:22:10.958 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1941215 00:22:10.958 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:10.958 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:10.958 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1941215 00:22:10.958 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:10.958 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:10.958 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1941215' 00:22:10.958 killing process with pid 1941215 00:22:10.958 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1941215 00:22:10.958 Received shutdown signal, test time was about 10.000000 seconds 00:22:10.958 00:22:10.958 Latency(us) 00:22:10.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.958 =================================================================================================================== 00:22:10.958 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:10.958 [2024-07-15 09:55:27.600187] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:10.958 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1941215 00:22:11.218 09:55:27 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1941064 00:22:11.218 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1941064 ']' 00:22:11.218 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1941064 00:22:11.218 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:11.218 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:11.218 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1941064 00:22:11.218 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:11.218 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:11.218 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1941064' 00:22:11.218 killing process with pid 1941064 00:22:11.218 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1941064 00:22:11.218 [2024-07-15 09:55:27.856076] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal 
in v24.09 hit 1 times 00:22:11.218 09:55:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1941064 00:22:11.477 09:55:28 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:11.477 09:55:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:11.477 09:55:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:11.477 09:55:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.477 09:55:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1942540 00:22:11.477 09:55:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:11.477 09:55:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1942540 00:22:11.477 09:55:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1942540 ']' 00:22:11.477 09:55:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.477 09:55:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.477 09:55:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.477 09:55:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.478 09:55:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.478 [2024-07-15 09:55:28.168611] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:11.478 [2024-07-15 09:55:28.168697] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.478 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.478 [2024-07-15 09:55:28.212675] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:11.478 [2024-07-15 09:55:28.240016] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.736 [2024-07-15 09:55:28.327343] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.736 [2024-07-15 09:55:28.327408] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.736 [2024-07-15 09:55:28.327421] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.736 [2024-07-15 09:55:28.327432] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.736 [2024-07-15 09:55:28.327442] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
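[From here the script configures a fresh target imperatively instead of replaying saved JSON. The RPC sequence in the trace that follows builds the same TLS-enabled subsystem step by step; collected in one place for readability (commands copied from the trace, with the long rpc.py path shortened into a variable):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o                        # TCP transport
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -s SPDK00000000000001 -m 10                             # subsystem, 10 namespaces max
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                           # -k = secure channel (TLS)
$RPC bdev_malloc_create 32 4096 -b malloc0                  # backing malloc bdev
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QridQIgu4A     # per-host PSK (deprecated path form)

The -k listener flag corresponds to the "secure_channel": true field seen in the earlier save_config dumps, and the --psk file path is what triggers the "nvmf_tcp_psk_path: deprecated feature PSK path" warnings in this log.]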
00:22:11.736 [2024-07-15 09:55:28.327475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.736 09:55:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:11.736 09:55:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:11.736 09:55:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:11.736 09:55:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:11.736 09:55:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.736 09:55:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.736 09:55:28 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.QridQIgu4A 00:22:11.736 09:55:28 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QridQIgu4A 00:22:11.736 09:55:28 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:11.994 [2024-07-15 09:55:28.732668] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.994 09:55:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:12.252 09:55:29 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:12.510 [2024-07-15 09:55:29.221974] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:12.510 [2024-07-15 09:55:29.222217] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.510 09:55:29 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:12.768 malloc0 00:22:12.768 09:55:29 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:13.026 09:55:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QridQIgu4A 00:22:13.284 [2024-07-15 09:55:29.963082] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:13.284 09:55:29 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1942823 00:22:13.284 09:55:29 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:13.284 09:55:29 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:13.284 09:55:29 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1942823 /var/tmp/bdevperf.sock 00:22:13.284 09:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1942823 ']' 00:22:13.284 09:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.284 09:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:13.284 09:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:13.284 09:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:13.284 09:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:13.284 [2024-07-15 09:55:30.030788] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:13.284 [2024-07-15 09:55:30.030887] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1942823 ] 00:22:13.284 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.284 [2024-07-15 09:55:30.062816] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:13.541 [2024-07-15 09:55:30.095052] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.541 [2024-07-15 09:55:30.186709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.541 09:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:13.541 09:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:13.541 09:55:30 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QridQIgu4A 00:22:13.799 09:55:30 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:14.057 [2024-07-15 09:55:30.783618] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:14.314 nvme0n1 00:22:14.314 09:55:30 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:14.314 Running I/O for 1 seconds... 
00:22:15.250 00:22:15.250 Latency(us) 00:22:15.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.250 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:15.250 Verification LBA range: start 0x0 length 0x2000 00:22:15.250 nvme0n1 : 1.03 3190.32 12.46 0.00 0.00 39463.81 6505.05 62526.20 00:22:15.250 =================================================================================================================== 00:22:15.250 Total : 3190.32 12.46 0.00 0.00 39463.81 6505.05 62526.20 00:22:15.250 0 00:22:15.250 09:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1942823 00:22:15.250 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1942823 ']' 00:22:15.250 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1942823 00:22:15.250 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:15.250 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:15.250 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1942823 00:22:15.508 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:15.508 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:15.508 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1942823' 00:22:15.508 killing process with pid 1942823 00:22:15.508 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1942823 00:22:15.508 Received shutdown signal, test time was about 1.000000 seconds 00:22:15.508 00:22:15.508 Latency(us) 00:22:15.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.508 =================================================================================================================== 00:22:15.508 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:15.508 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1942823 00:22:15.508 09:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1942540 00:22:15.508 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1942540 ']' 00:22:15.508 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1942540 00:22:15.508 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:15.508 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:15.508 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1942540 00:22:15.765 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:15.765 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:15.765 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1942540' 00:22:15.765 killing process with pid 1942540 00:22:15.765 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1942540 00:22:15.765 [2024-07-15 09:55:32.304123] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:15.765 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1942540 00:22:16.023 09:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:22:16.023 09:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:16.023 
09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:16.023 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.023 09:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1943103 00:22:16.023 09:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:16.023 09:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1943103 00:22:16.023 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1943103 ']' 00:22:16.023 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.023 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.023 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.023 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.023 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.023 [2024-07-15 09:55:32.603840] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:16.023 [2024-07-15 09:55:32.603936] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.023 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.023 [2024-07-15 09:55:32.639870] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:16.023 [2024-07-15 09:55:32.667026] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.023 [2024-07-15 09:55:32.753631] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.023 [2024-07-15 09:55:32.753683] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.023 [2024-07-15 09:55:32.753710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.023 [2024-07-15 09:55:32.753721] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.023 [2024-07-15 09:55:32.753730] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
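[On the initiator side these later runs have moved off the deprecated file-path PSK (--psk /tmp/..., which produced the spdk_nvme_ctrlr_opts.psk deprecation warnings earlier): the key is now registered in the keyring first and then referenced by name. The two host-side calls, as they appear in the attach sequence above and again in the run that follows:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Register the PSK file under a key name on the bdevperf instance...
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QridQIgu4A

# ...and attach the controller referencing the key instead of the raw path
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

With the keyring form, the subsequent save_config dump also gains a "keyring" subsystem entry carrying the keyring_file_add_key call, as the tgtcfg output near the end of this section shows.]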
00:22:16.023 [2024-07-15 09:55:32.753761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.281 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:16.281 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:16.281 09:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:16.281 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:16.281 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.281 09:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.281 09:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:22:16.281 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.281 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.281 [2024-07-15 09:55:32.896775] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.281 malloc0 00:22:16.281 [2024-07-15 09:55:32.928714] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:16.281 [2024-07-15 09:55:32.929018] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.281 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.281 09:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1943122 00:22:16.281 09:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1943122 /var/tmp/bdevperf.sock 00:22:16.281 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1943122 ']' 00:22:16.281 09:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:16.281 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.281 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.281 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.281 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.281 09:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.281 [2024-07-15 09:55:33.001661] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:16.281 [2024-07-15 09:55:33.001734] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1943122 ] 00:22:16.281 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.281 [2024-07-15 09:55:33.032688] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:16.282 [2024-07-15 09:55:33.059930] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.540 [2024-07-15 09:55:33.154331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.540 09:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:16.540 09:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:16.540 09:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QridQIgu4A 00:22:16.798 09:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:17.056 [2024-07-15 09:55:33.726025] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:17.056 nvme0n1 00:22:17.056 09:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:17.314 Running I/O for 1 seconds... 00:22:18.269 00:22:18.269 Latency(us) 00:22:18.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.269 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:18.269 Verification LBA range: start 0x0 length 0x2000 00:22:18.269 nvme0n1 : 1.04 3086.73 12.06 0.00 0.00 40745.33 9951.76 67186.54 00:22:18.269 =================================================================================================================== 00:22:18.269 Total : 3086.73 12.06 0.00 0.00 40745.33 9951.76 67186.54 00:22:18.269 0 00:22:18.269 09:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:22:18.269 09:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.269 09:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.527 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.527 09:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:22:18.527 "subsystems": [ 00:22:18.527 { 00:22:18.527 "subsystem": "keyring", 00:22:18.527 "config": [ 00:22:18.527 { 00:22:18.527 "method": "keyring_file_add_key", 00:22:18.527 "params": { 00:22:18.527 "name": "key0", 00:22:18.527 "path": "/tmp/tmp.QridQIgu4A" 00:22:18.527 } 00:22:18.527 } 00:22:18.527 ] 00:22:18.527 }, 00:22:18.527 { 00:22:18.527 "subsystem": "iobuf", 00:22:18.527 "config": [ 00:22:18.527 { 00:22:18.527 "method": "iobuf_set_options", 00:22:18.527 "params": { 00:22:18.527 "small_pool_count": 8192, 00:22:18.527 "large_pool_count": 1024, 00:22:18.527 "small_bufsize": 8192, 00:22:18.527 "large_bufsize": 135168 00:22:18.527 } 00:22:18.527 } 00:22:18.527 ] 00:22:18.527 }, 00:22:18.527 { 00:22:18.527 "subsystem": "sock", 00:22:18.527 "config": [ 00:22:18.527 { 00:22:18.527 "method": "sock_set_default_impl", 00:22:18.527 "params": { 00:22:18.527 "impl_name": "posix" 00:22:18.527 } 00:22:18.527 }, 00:22:18.527 { 00:22:18.527 "method": "sock_impl_set_options", 00:22:18.527 "params": { 00:22:18.527 "impl_name": "ssl", 00:22:18.527 "recv_buf_size": 4096, 00:22:18.527 "send_buf_size": 4096, 00:22:18.527 "enable_recv_pipe": true, 00:22:18.527 "enable_quickack": false, 00:22:18.528 "enable_placement_id": 0, 00:22:18.528 
"enable_zerocopy_send_server": true, 00:22:18.528 "enable_zerocopy_send_client": false, 00:22:18.528 "zerocopy_threshold": 0, 00:22:18.528 "tls_version": 0, 00:22:18.528 "enable_ktls": false 00:22:18.528 } 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "method": "sock_impl_set_options", 00:22:18.528 "params": { 00:22:18.528 "impl_name": "posix", 00:22:18.528 "recv_buf_size": 2097152, 00:22:18.528 "send_buf_size": 2097152, 00:22:18.528 "enable_recv_pipe": true, 00:22:18.528 "enable_quickack": false, 00:22:18.528 "enable_placement_id": 0, 00:22:18.528 "enable_zerocopy_send_server": true, 00:22:18.528 "enable_zerocopy_send_client": false, 00:22:18.528 "zerocopy_threshold": 0, 00:22:18.528 "tls_version": 0, 00:22:18.528 "enable_ktls": false 00:22:18.528 } 00:22:18.528 } 00:22:18.528 ] 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "subsystem": "vmd", 00:22:18.528 "config": [] 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "subsystem": "accel", 00:22:18.528 "config": [ 00:22:18.528 { 00:22:18.528 "method": "accel_set_options", 00:22:18.528 "params": { 00:22:18.528 "small_cache_size": 128, 00:22:18.528 "large_cache_size": 16, 00:22:18.528 "task_count": 2048, 00:22:18.528 "sequence_count": 2048, 00:22:18.528 "buf_count": 2048 00:22:18.528 } 00:22:18.528 } 00:22:18.528 ] 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "subsystem": "bdev", 00:22:18.528 "config": [ 00:22:18.528 { 00:22:18.528 "method": "bdev_set_options", 00:22:18.528 "params": { 00:22:18.528 "bdev_io_pool_size": 65535, 00:22:18.528 "bdev_io_cache_size": 256, 00:22:18.528 "bdev_auto_examine": true, 00:22:18.528 "iobuf_small_cache_size": 128, 00:22:18.528 "iobuf_large_cache_size": 16 00:22:18.528 } 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "method": "bdev_raid_set_options", 00:22:18.528 "params": { 00:22:18.528 "process_window_size_kb": 1024 00:22:18.528 } 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "method": "bdev_iscsi_set_options", 00:22:18.528 "params": { 00:22:18.528 "timeout_sec": 30 00:22:18.528 } 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "method": "bdev_nvme_set_options", 00:22:18.528 "params": { 00:22:18.528 "action_on_timeout": "none", 00:22:18.528 "timeout_us": 0, 00:22:18.528 "timeout_admin_us": 0, 00:22:18.528 "keep_alive_timeout_ms": 10000, 00:22:18.528 "arbitration_burst": 0, 00:22:18.528 "low_priority_weight": 0, 00:22:18.528 "medium_priority_weight": 0, 00:22:18.528 "high_priority_weight": 0, 00:22:18.528 "nvme_adminq_poll_period_us": 10000, 00:22:18.528 "nvme_ioq_poll_period_us": 0, 00:22:18.528 "io_queue_requests": 0, 00:22:18.528 "delay_cmd_submit": true, 00:22:18.528 "transport_retry_count": 4, 00:22:18.528 "bdev_retry_count": 3, 00:22:18.528 "transport_ack_timeout": 0, 00:22:18.528 "ctrlr_loss_timeout_sec": 0, 00:22:18.528 "reconnect_delay_sec": 0, 00:22:18.528 "fast_io_fail_timeout_sec": 0, 00:22:18.528 "disable_auto_failback": false, 00:22:18.528 "generate_uuids": false, 00:22:18.528 "transport_tos": 0, 00:22:18.528 "nvme_error_stat": false, 00:22:18.528 "rdma_srq_size": 0, 00:22:18.528 "io_path_stat": false, 00:22:18.528 "allow_accel_sequence": false, 00:22:18.528 "rdma_max_cq_size": 0, 00:22:18.528 "rdma_cm_event_timeout_ms": 0, 00:22:18.528 "dhchap_digests": [ 00:22:18.528 "sha256", 00:22:18.528 "sha384", 00:22:18.528 "sha512" 00:22:18.528 ], 00:22:18.528 "dhchap_dhgroups": [ 00:22:18.528 "null", 00:22:18.528 "ffdhe2048", 00:22:18.528 "ffdhe3072", 00:22:18.528 "ffdhe4096", 00:22:18.528 "ffdhe6144", 00:22:18.528 "ffdhe8192" 00:22:18.528 ] 00:22:18.528 } 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "method": 
"bdev_nvme_set_hotplug", 00:22:18.528 "params": { 00:22:18.528 "period_us": 100000, 00:22:18.528 "enable": false 00:22:18.528 } 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "method": "bdev_malloc_create", 00:22:18.528 "params": { 00:22:18.528 "name": "malloc0", 00:22:18.528 "num_blocks": 8192, 00:22:18.528 "block_size": 4096, 00:22:18.528 "physical_block_size": 4096, 00:22:18.528 "uuid": "a908f2d2-e739-40b0-a152-5ece42ec3c20", 00:22:18.528 "optimal_io_boundary": 0 00:22:18.528 } 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "method": "bdev_wait_for_examine" 00:22:18.528 } 00:22:18.528 ] 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "subsystem": "nbd", 00:22:18.528 "config": [] 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "subsystem": "scheduler", 00:22:18.528 "config": [ 00:22:18.528 { 00:22:18.528 "method": "framework_set_scheduler", 00:22:18.528 "params": { 00:22:18.528 "name": "static" 00:22:18.528 } 00:22:18.528 } 00:22:18.528 ] 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "subsystem": "nvmf", 00:22:18.528 "config": [ 00:22:18.528 { 00:22:18.528 "method": "nvmf_set_config", 00:22:18.528 "params": { 00:22:18.528 "discovery_filter": "match_any", 00:22:18.528 "admin_cmd_passthru": { 00:22:18.528 "identify_ctrlr": false 00:22:18.528 } 00:22:18.528 } 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "method": "nvmf_set_max_subsystems", 00:22:18.528 "params": { 00:22:18.528 "max_subsystems": 1024 00:22:18.528 } 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "method": "nvmf_set_crdt", 00:22:18.528 "params": { 00:22:18.528 "crdt1": 0, 00:22:18.528 "crdt2": 0, 00:22:18.528 "crdt3": 0 00:22:18.528 } 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "method": "nvmf_create_transport", 00:22:18.528 "params": { 00:22:18.528 "trtype": "TCP", 00:22:18.528 "max_queue_depth": 128, 00:22:18.528 "max_io_qpairs_per_ctrlr": 127, 00:22:18.528 "in_capsule_data_size": 4096, 00:22:18.528 "max_io_size": 131072, 00:22:18.528 "io_unit_size": 131072, 00:22:18.528 "max_aq_depth": 128, 00:22:18.528 "num_shared_buffers": 511, 00:22:18.528 "buf_cache_size": 4294967295, 00:22:18.528 "dif_insert_or_strip": false, 00:22:18.528 "zcopy": false, 00:22:18.528 "c2h_success": false, 00:22:18.528 "sock_priority": 0, 00:22:18.528 "abort_timeout_sec": 1, 00:22:18.528 "ack_timeout": 0, 00:22:18.528 "data_wr_pool_size": 0 00:22:18.528 } 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "method": "nvmf_create_subsystem", 00:22:18.528 "params": { 00:22:18.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.528 "allow_any_host": false, 00:22:18.528 "serial_number": "00000000000000000000", 00:22:18.528 "model_number": "SPDK bdev Controller", 00:22:18.528 "max_namespaces": 32, 00:22:18.528 "min_cntlid": 1, 00:22:18.528 "max_cntlid": 65519, 00:22:18.528 "ana_reporting": false 00:22:18.528 } 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "method": "nvmf_subsystem_add_host", 00:22:18.528 "params": { 00:22:18.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.528 "host": "nqn.2016-06.io.spdk:host1", 00:22:18.528 "psk": "key0" 00:22:18.528 } 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "method": "nvmf_subsystem_add_ns", 00:22:18.528 "params": { 00:22:18.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.528 "namespace": { 00:22:18.528 "nsid": 1, 00:22:18.528 "bdev_name": "malloc0", 00:22:18.528 "nguid": "A908F2D2E73940B0A1525ECE42EC3C20", 00:22:18.528 "uuid": "a908f2d2-e739-40b0-a152-5ece42ec3c20", 00:22:18.528 "no_auto_visible": false 00:22:18.528 } 00:22:18.528 } 00:22:18.528 }, 00:22:18.528 { 00:22:18.528 "method": "nvmf_subsystem_add_listener", 00:22:18.528 "params": { 
00:22:18.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.528 "listen_address": { 00:22:18.528 "trtype": "TCP", 00:22:18.528 "adrfam": "IPv4", 00:22:18.528 "traddr": "10.0.0.2", 00:22:18.528 "trsvcid": "4420" 00:22:18.528 }, 00:22:18.528 "secure_channel": true 00:22:18.528 } 00:22:18.528 } 00:22:18.528 ] 00:22:18.528 } 00:22:18.528 ] 00:22:18.528 }' 00:22:18.528 09:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:18.787 09:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:22:18.787 "subsystems": [ 00:22:18.787 { 00:22:18.787 "subsystem": "keyring", 00:22:18.787 "config": [ 00:22:18.787 { 00:22:18.787 "method": "keyring_file_add_key", 00:22:18.787 "params": { 00:22:18.787 "name": "key0", 00:22:18.787 "path": "/tmp/tmp.QridQIgu4A" 00:22:18.787 } 00:22:18.787 } 00:22:18.787 ] 00:22:18.787 }, 00:22:18.787 { 00:22:18.787 "subsystem": "iobuf", 00:22:18.787 "config": [ 00:22:18.787 { 00:22:18.787 "method": "iobuf_set_options", 00:22:18.787 "params": { 00:22:18.787 "small_pool_count": 8192, 00:22:18.787 "large_pool_count": 1024, 00:22:18.787 "small_bufsize": 8192, 00:22:18.787 "large_bufsize": 135168 00:22:18.787 } 00:22:18.787 } 00:22:18.787 ] 00:22:18.787 }, 00:22:18.787 { 00:22:18.787 "subsystem": "sock", 00:22:18.787 "config": [ 00:22:18.787 { 00:22:18.787 "method": "sock_set_default_impl", 00:22:18.787 "params": { 00:22:18.787 "impl_name": "posix" 00:22:18.787 } 00:22:18.787 }, 00:22:18.787 { 00:22:18.787 "method": "sock_impl_set_options", 00:22:18.787 "params": { 00:22:18.787 "impl_name": "ssl", 00:22:18.787 "recv_buf_size": 4096, 00:22:18.787 "send_buf_size": 4096, 00:22:18.787 "enable_recv_pipe": true, 00:22:18.787 "enable_quickack": false, 00:22:18.787 "enable_placement_id": 0, 00:22:18.787 "enable_zerocopy_send_server": true, 00:22:18.787 "enable_zerocopy_send_client": false, 00:22:18.787 "zerocopy_threshold": 0, 00:22:18.787 "tls_version": 0, 00:22:18.787 "enable_ktls": false 00:22:18.787 } 00:22:18.787 }, 00:22:18.787 { 00:22:18.787 "method": "sock_impl_set_options", 00:22:18.787 "params": { 00:22:18.787 "impl_name": "posix", 00:22:18.787 "recv_buf_size": 2097152, 00:22:18.787 "send_buf_size": 2097152, 00:22:18.787 "enable_recv_pipe": true, 00:22:18.787 "enable_quickack": false, 00:22:18.787 "enable_placement_id": 0, 00:22:18.787 "enable_zerocopy_send_server": true, 00:22:18.787 "enable_zerocopy_send_client": false, 00:22:18.787 "zerocopy_threshold": 0, 00:22:18.787 "tls_version": 0, 00:22:18.787 "enable_ktls": false 00:22:18.787 } 00:22:18.787 } 00:22:18.787 ] 00:22:18.787 }, 00:22:18.787 { 00:22:18.787 "subsystem": "vmd", 00:22:18.787 "config": [] 00:22:18.787 }, 00:22:18.787 { 00:22:18.787 "subsystem": "accel", 00:22:18.787 "config": [ 00:22:18.787 { 00:22:18.787 "method": "accel_set_options", 00:22:18.787 "params": { 00:22:18.787 "small_cache_size": 128, 00:22:18.787 "large_cache_size": 16, 00:22:18.787 "task_count": 2048, 00:22:18.787 "sequence_count": 2048, 00:22:18.787 "buf_count": 2048 00:22:18.787 } 00:22:18.787 } 00:22:18.787 ] 00:22:18.787 }, 00:22:18.787 { 00:22:18.787 "subsystem": "bdev", 00:22:18.787 "config": [ 00:22:18.787 { 00:22:18.787 "method": "bdev_set_options", 00:22:18.787 "params": { 00:22:18.787 "bdev_io_pool_size": 65535, 00:22:18.787 "bdev_io_cache_size": 256, 00:22:18.788 "bdev_auto_examine": true, 00:22:18.788 "iobuf_small_cache_size": 128, 00:22:18.788 "iobuf_large_cache_size": 16 00:22:18.788 } 00:22:18.788 }, 00:22:18.788 { 
00:22:18.788 "method": "bdev_raid_set_options", 00:22:18.788 "params": { 00:22:18.788 "process_window_size_kb": 1024 00:22:18.788 } 00:22:18.788 }, 00:22:18.788 { 00:22:18.788 "method": "bdev_iscsi_set_options", 00:22:18.788 "params": { 00:22:18.788 "timeout_sec": 30 00:22:18.788 } 00:22:18.788 }, 00:22:18.788 { 00:22:18.788 "method": "bdev_nvme_set_options", 00:22:18.788 "params": { 00:22:18.788 "action_on_timeout": "none", 00:22:18.788 "timeout_us": 0, 00:22:18.788 "timeout_admin_us": 0, 00:22:18.788 "keep_alive_timeout_ms": 10000, 00:22:18.788 "arbitration_burst": 0, 00:22:18.788 "low_priority_weight": 0, 00:22:18.788 "medium_priority_weight": 0, 00:22:18.788 "high_priority_weight": 0, 00:22:18.788 "nvme_adminq_poll_period_us": 10000, 00:22:18.788 "nvme_ioq_poll_period_us": 0, 00:22:18.788 "io_queue_requests": 512, 00:22:18.788 "delay_cmd_submit": true, 00:22:18.788 "transport_retry_count": 4, 00:22:18.788 "bdev_retry_count": 3, 00:22:18.788 "transport_ack_timeout": 0, 00:22:18.788 "ctrlr_loss_timeout_sec": 0, 00:22:18.788 "reconnect_delay_sec": 0, 00:22:18.788 "fast_io_fail_timeout_sec": 0, 00:22:18.788 "disable_auto_failback": false, 00:22:18.788 "generate_uuids": false, 00:22:18.788 "transport_tos": 0, 00:22:18.788 "nvme_error_stat": false, 00:22:18.788 "rdma_srq_size": 0, 00:22:18.788 "io_path_stat": false, 00:22:18.788 "allow_accel_sequence": false, 00:22:18.788 "rdma_max_cq_size": 0, 00:22:18.788 "rdma_cm_event_timeout_ms": 0, 00:22:18.788 "dhchap_digests": [ 00:22:18.788 "sha256", 00:22:18.788 "sha384", 00:22:18.788 "sha512" 00:22:18.788 ], 00:22:18.788 "dhchap_dhgroups": [ 00:22:18.788 "null", 00:22:18.788 "ffdhe2048", 00:22:18.788 "ffdhe3072", 00:22:18.788 "ffdhe4096", 00:22:18.788 "ffdhe6144", 00:22:18.788 "ffdhe8192" 00:22:18.788 ] 00:22:18.788 } 00:22:18.788 }, 00:22:18.788 { 00:22:18.788 "method": "bdev_nvme_attach_controller", 00:22:18.788 "params": { 00:22:18.788 "name": "nvme0", 00:22:18.788 "trtype": "TCP", 00:22:18.788 "adrfam": "IPv4", 00:22:18.788 "traddr": "10.0.0.2", 00:22:18.788 "trsvcid": "4420", 00:22:18.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.788 "prchk_reftag": false, 00:22:18.788 "prchk_guard": false, 00:22:18.788 "ctrlr_loss_timeout_sec": 0, 00:22:18.788 "reconnect_delay_sec": 0, 00:22:18.788 "fast_io_fail_timeout_sec": 0, 00:22:18.788 "psk": "key0", 00:22:18.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:18.788 "hdgst": false, 00:22:18.788 "ddgst": false 00:22:18.788 } 00:22:18.788 }, 00:22:18.788 { 00:22:18.788 "method": "bdev_nvme_set_hotplug", 00:22:18.788 "params": { 00:22:18.788 "period_us": 100000, 00:22:18.788 "enable": false 00:22:18.788 } 00:22:18.788 }, 00:22:18.788 { 00:22:18.788 "method": "bdev_enable_histogram", 00:22:18.788 "params": { 00:22:18.788 "name": "nvme0n1", 00:22:18.788 "enable": true 00:22:18.788 } 00:22:18.788 }, 00:22:18.788 { 00:22:18.788 "method": "bdev_wait_for_examine" 00:22:18.788 } 00:22:18.788 ] 00:22:18.788 }, 00:22:18.788 { 00:22:18.788 "subsystem": "nbd", 00:22:18.788 "config": [] 00:22:18.788 } 00:22:18.788 ] 00:22:18.788 }' 00:22:18.788 09:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1943122 00:22:18.788 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1943122 ']' 00:22:18.788 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1943122 00:22:18.788 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:18.788 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:18.788 09:55:35 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1943122 00:22:18.788 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:18.788 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:18.788 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1943122' 00:22:18.788 killing process with pid 1943122 00:22:18.788 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1943122 00:22:18.788 Received shutdown signal, test time was about 1.000000 seconds 00:22:18.788 00:22:18.788 Latency(us) 00:22:18.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.788 =================================================================================================================== 00:22:18.788 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:18.788 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1943122 00:22:19.046 09:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1943103 00:22:19.046 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1943103 ']' 00:22:19.046 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1943103 00:22:19.046 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:19.046 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:19.046 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1943103 00:22:19.046 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:19.046 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:19.046 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1943103' 00:22:19.047 killing process with pid 1943103 00:22:19.047 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1943103 00:22:19.047 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1943103 00:22:19.304 09:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:22:19.304 09:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.304 09:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:22:19.304 "subsystems": [ 00:22:19.304 { 00:22:19.304 "subsystem": "keyring", 00:22:19.304 "config": [ 00:22:19.304 { 00:22:19.304 "method": "keyring_file_add_key", 00:22:19.304 "params": { 00:22:19.304 "name": "key0", 00:22:19.304 "path": "/tmp/tmp.QridQIgu4A" 00:22:19.304 } 00:22:19.304 } 00:22:19.304 ] 00:22:19.304 }, 00:22:19.304 { 00:22:19.304 "subsystem": "iobuf", 00:22:19.304 "config": [ 00:22:19.304 { 00:22:19.304 "method": "iobuf_set_options", 00:22:19.304 "params": { 00:22:19.304 "small_pool_count": 8192, 00:22:19.304 "large_pool_count": 1024, 00:22:19.304 "small_bufsize": 8192, 00:22:19.304 "large_bufsize": 135168 00:22:19.304 } 00:22:19.304 } 00:22:19.304 ] 00:22:19.304 }, 00:22:19.304 { 00:22:19.304 "subsystem": "sock", 00:22:19.304 "config": [ 00:22:19.304 { 00:22:19.304 "method": "sock_set_default_impl", 00:22:19.304 "params": { 00:22:19.304 "impl_name": "posix" 00:22:19.304 } 00:22:19.304 }, 00:22:19.304 { 00:22:19.304 "method": "sock_impl_set_options", 00:22:19.304 "params": { 00:22:19.304 "impl_name": "ssl", 00:22:19.304 "recv_buf_size": 4096, 00:22:19.304 "send_buf_size": 4096, 00:22:19.304 
"enable_recv_pipe": true, 00:22:19.304 "enable_quickack": false, 00:22:19.304 "enable_placement_id": 0, 00:22:19.304 "enable_zerocopy_send_server": true, 00:22:19.304 "enable_zerocopy_send_client": false, 00:22:19.304 "zerocopy_threshold": 0, 00:22:19.304 "tls_version": 0, 00:22:19.304 "enable_ktls": false 00:22:19.304 } 00:22:19.304 }, 00:22:19.304 { 00:22:19.304 "method": "sock_impl_set_options", 00:22:19.304 "params": { 00:22:19.304 "impl_name": "posix", 00:22:19.304 "recv_buf_size": 2097152, 00:22:19.304 "send_buf_size": 2097152, 00:22:19.304 "enable_recv_pipe": true, 00:22:19.304 "enable_quickack": false, 00:22:19.304 "enable_placement_id": 0, 00:22:19.304 "enable_zerocopy_send_server": true, 00:22:19.304 "enable_zerocopy_send_client": false, 00:22:19.304 "zerocopy_threshold": 0, 00:22:19.304 "tls_version": 0, 00:22:19.304 "enable_ktls": false 00:22:19.304 } 00:22:19.304 } 00:22:19.304 ] 00:22:19.304 }, 00:22:19.304 { 00:22:19.304 "subsystem": "vmd", 00:22:19.304 "config": [] 00:22:19.304 }, 00:22:19.304 { 00:22:19.304 "subsystem": "accel", 00:22:19.304 "config": [ 00:22:19.304 { 00:22:19.304 "method": "accel_set_options", 00:22:19.304 "params": { 00:22:19.305 "small_cache_size": 128, 00:22:19.305 "large_cache_size": 16, 00:22:19.305 "task_count": 2048, 00:22:19.305 "sequence_count": 2048, 00:22:19.305 "buf_count": 2048 00:22:19.305 } 00:22:19.305 } 00:22:19.305 ] 00:22:19.305 }, 00:22:19.305 { 00:22:19.305 "subsystem": "bdev", 00:22:19.305 "config": [ 00:22:19.305 { 00:22:19.305 "method": "bdev_set_options", 00:22:19.305 "params": { 00:22:19.305 "bdev_io_pool_size": 65535, 00:22:19.305 "bdev_io_cache_size": 256, 00:22:19.305 "bdev_auto_examine": true, 00:22:19.305 "iobuf_small_cache_size": 128, 00:22:19.305 "iobuf_large_cache_size": 16 00:22:19.305 } 00:22:19.305 }, 00:22:19.305 { 00:22:19.305 "method": "bdev_raid_set_options", 00:22:19.305 "params": { 00:22:19.305 "process_window_size_kb": 1024 00:22:19.305 } 00:22:19.305 }, 00:22:19.305 { 00:22:19.305 "method": "bdev_iscsi_set_options", 00:22:19.305 "params": { 00:22:19.305 "timeout_sec": 30 00:22:19.305 } 00:22:19.305 }, 00:22:19.305 { 00:22:19.305 "method": "bdev_nvme_set_options", 00:22:19.305 "params": { 00:22:19.305 "action_on_timeout": "none", 00:22:19.305 "timeout_us": 0, 00:22:19.305 "timeout_admin_us": 0, 00:22:19.305 "keep_alive_timeout_ms": 10000, 00:22:19.305 "arbitration_burst": 0, 00:22:19.305 "low_priority_weight": 0, 00:22:19.305 "medium_priority_weight": 0, 00:22:19.305 "high_priority_weight": 0, 00:22:19.305 "nvme_adminq_poll_period_us": 10000, 00:22:19.305 "nvme_ioq_poll_period_us": 0, 00:22:19.305 "io_queue_requests": 0, 00:22:19.305 "delay_cmd_submit": true, 00:22:19.305 "transport_retry_count": 4, 00:22:19.305 "bdev_retry_count": 3, 00:22:19.305 "transport_ack_timeout": 0, 00:22:19.305 "ctrlr_loss_timeout_sec": 0, 00:22:19.305 "reconnect_delay_sec": 0, 00:22:19.305 "fast_io_fail_timeout_sec": 0, 00:22:19.305 "disable_auto_failback": false, 00:22:19.305 "generate_uuids": false, 00:22:19.305 "transport_tos": 0, 00:22:19.305 "nvme_error_stat": false, 00:22:19.305 "rdma_srq_size": 0, 00:22:19.305 "io_path_stat": false, 00:22:19.305 "allow_accel_sequence": false, 00:22:19.305 "rdma_max_cq_size": 0, 00:22:19.305 "rdma_cm_event_timeout_ms": 0, 00:22:19.305 "dhchap_digests": [ 00:22:19.305 "sha256", 00:22:19.305 "sha384", 00:22:19.305 "sha512" 00:22:19.305 ], 00:22:19.305 "dhchap_dhgroups": [ 00:22:19.305 "null", 00:22:19.305 "ffdhe2048", 00:22:19.305 "ffdhe3072", 00:22:19.305 "ffdhe4096", 00:22:19.305 "ffdhe6144", 
00:22:19.305 "ffdhe8192" 00:22:19.305 ] 00:22:19.305 } 00:22:19.305 }, 00:22:19.305 { 00:22:19.305 "method": "bdev_nvme_set_hotplug", 00:22:19.305 "params": { 00:22:19.305 "period_us": 100000, 00:22:19.305 "enable": false 00:22:19.305 } 00:22:19.305 }, 00:22:19.305 { 00:22:19.305 "method": "bdev_malloc_create", 00:22:19.305 "params": { 00:22:19.305 "name": "malloc0", 00:22:19.305 "num_blocks": 8192, 00:22:19.305 "block_size": 4096, 00:22:19.305 "physical_block_size": 4096, 00:22:19.305 "uuid": "a908f2d2-e739-40b0-a152-5ece42ec3c20", 00:22:19.305 "optimal_io_boundary": 0 00:22:19.305 } 00:22:19.305 }, 00:22:19.305 { 00:22:19.305 "method": "bdev_wait_for_examine" 00:22:19.305 } 00:22:19.305 ] 00:22:19.305 }, 00:22:19.305 { 00:22:19.305 "subsystem": "nbd", 00:22:19.305 "config": [] 00:22:19.305 }, 00:22:19.305 { 00:22:19.305 "subsystem": "scheduler", 00:22:19.305 "config": [ 00:22:19.305 { 00:22:19.305 "method": "framework_set_scheduler", 00:22:19.305 "params": { 00:22:19.305 "name": "static" 00:22:19.305 } 00:22:19.305 } 00:22:19.305 ] 00:22:19.305 }, 00:22:19.305 { 00:22:19.305 "subsystem": "nvmf", 00:22:19.305 "config": [ 00:22:19.305 { 00:22:19.305 "method": "nvmf_set_config", 00:22:19.305 "params": { 00:22:19.305 "discovery_filter": "match_any", 00:22:19.305 "admin_cmd_passthru": { 00:22:19.305 "identify_ctrlr": false 00:22:19.305 } 00:22:19.305 } 00:22:19.305 }, 00:22:19.305 { 00:22:19.305 "method": "nvmf_set_max_subsystems", 00:22:19.305 "params": { 00:22:19.305 "max_subsystems": 1024 00:22:19.305 } 00:22:19.305 }, 00:22:19.305 { 00:22:19.305 "method": "nvmf_set_crdt", 00:22:19.305 "params": { 00:22:19.305 "crdt1": 0, 00:22:19.305 "crdt2": 0, 00:22:19.305 "crdt3": 0 00:22:19.305 } 00:22:19.305 }, 00:22:19.305 { 00:22:19.305 "method": "nvmf_create_transport", 00:22:19.305 "params": { 00:22:19.305 "trtype": "TCP", 00:22:19.305 "max_queue_depth": 128, 00:22:19.305 "max_io_qpairs_per_ctrlr": 127, 00:22:19.305 "in_capsule_data_size": 4096, 00:22:19.305 "max_io_size": 131072, 00:22:19.305 "io_unit_size": 131072, 00:22:19.305 "max_aq_depth": 128, 00:22:19.305 "num_shared_buffers": 511, 00:22:19.305 "buf_cache_size": 4294967295, 00:22:19.305 "dif_insert_or_strip": false, 00:22:19.305 "zcopy": false, 00:22:19.305 "c2h_success": false, 00:22:19.305 "sock_priority": 0, 00:22:19.305 "abort_timeout_sec": 1, 00:22:19.305 "ack_timeout": 0, 00:22:19.305 "data_wr_pool_size": 0 00:22:19.305 } 00:22:19.305 }, 00:22:19.305 { 00:22:19.305 "method": "nvmf_create_subsystem", 00:22:19.305 "params": { 00:22:19.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.305 "allow_any_host": false, 00:22:19.305 "serial_number": "00000000000000000000", 00:22:19.305 "model_number": "SPDK bdev Controller", 00:22:19.305 "max_namespaces": 32, 00:22:19.305 "min_cntlid": 1, 00:22:19.305 "max_cntlid": 65519, 00:22:19.305 "ana_reporting": false 00:22:19.305 } 00:22:19.305 }, 00:22:19.305 { 00:22:19.305 "method": "nvmf_subsystem_add_host", 00:22:19.305 "params": { 00:22:19.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.305 "host": "nqn.2016-06.io.spdk:host1", 00:22:19.305 "psk": "key0" 00:22:19.305 } 00:22:19.305 }, 00:22:19.305 { 00:22:19.305 "method": "nvmf_subsystem_add_ns", 00:22:19.305 "params": { 00:22:19.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.305 "namespace": { 00:22:19.305 "nsid": 1, 00:22:19.305 "bdev_name": "malloc0", 00:22:19.305 "nguid": "A908F2D2E73940B0A1525ECE42EC3C20", 00:22:19.305 "uuid": "a908f2d2-e739-40b0-a152-5ece42ec3c20", 00:22:19.305 "no_auto_visible": false 00:22:19.305 } 00:22:19.305 } 
00:22:19.305 }, 00:22:19.305 { 00:22:19.305 "method": "nvmf_subsystem_add_listener", 00:22:19.305 "params": { 00:22:19.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.305 "listen_address": { 00:22:19.305 "trtype": "TCP", 00:22:19.305 "adrfam": "IPv4", 00:22:19.305 "traddr": "10.0.0.2", 00:22:19.305 "trsvcid": "4420" 00:22:19.305 }, 00:22:19.305 "secure_channel": true 00:22:19.305 } 00:22:19.305 } 00:22:19.305 ] 00:22:19.305 } 00:22:19.305 ] 00:22:19.305 }' 00:22:19.305 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:19.305 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.305 09:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1943529 00:22:19.305 09:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:19.305 09:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1943529 00:22:19.305 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1943529 ']' 00:22:19.305 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.305 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:19.305 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.305 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:19.305 09:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.305 [2024-07-15 09:55:35.965781] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:19.305 [2024-07-15 09:55:35.965868] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.305 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.305 [2024-07-15 09:55:36.002488] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:19.305 [2024-07-15 09:55:36.034705] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.564 [2024-07-15 09:55:36.123445] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.564 [2024-07-15 09:55:36.123514] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.564 [2024-07-15 09:55:36.123528] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.564 [2024-07-15 09:55:36.123553] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.564 [2024-07-15 09:55:36.123562] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
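Here the target is restarted from the configuration captured earlier rather than rebuilt RPC by RPC: the saved JSON is echoed into a file descriptor and handed to nvmf_tgt with '-c /dev/fd/62'. A sketch of the shape of that pattern (variable name and flags as in this log; the fd number is whatever the shell assigns to the process substitution):

  tgtcfg=$(rpc.py save_config)                     # JSON dump of the running target
  nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")     # appears to the app as /dev/fd/NN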
00:22:19.564 [2024-07-15 09:55:36.123634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.822 [2024-07-15 09:55:36.365437] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.822 [2024-07-15 09:55:36.397460] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:19.822 [2024-07-15 09:55:36.409047] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.389 09:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:20.389 09:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:20.389 09:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:20.389 09:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:20.389 09:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.389 09:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.389 09:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1943683 00:22:20.389 09:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1943683 /var/tmp/bdevperf.sock 00:22:20.389 09:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1943683 ']' 00:22:20.389 09:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:20.389 09:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:20.389 09:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:20.389 09:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:22:20.389 "subsystems": [ 00:22:20.389 { 00:22:20.389 "subsystem": "keyring", 00:22:20.389 "config": [ 00:22:20.389 { 00:22:20.389 "method": "keyring_file_add_key", 00:22:20.389 "params": { 00:22:20.389 "name": "key0", 00:22:20.389 "path": "/tmp/tmp.QridQIgu4A" 00:22:20.389 } 00:22:20.389 } 00:22:20.389 ] 00:22:20.389 }, 00:22:20.389 { 00:22:20.389 "subsystem": "iobuf", 00:22:20.389 "config": [ 00:22:20.389 { 00:22:20.389 "method": "iobuf_set_options", 00:22:20.389 "params": { 00:22:20.389 "small_pool_count": 8192, 00:22:20.389 "large_pool_count": 1024, 00:22:20.389 "small_bufsize": 8192, 00:22:20.389 "large_bufsize": 135168 00:22:20.389 } 00:22:20.389 } 00:22:20.389 ] 00:22:20.389 }, 00:22:20.389 { 00:22:20.389 "subsystem": "sock", 00:22:20.389 "config": [ 00:22:20.389 { 00:22:20.389 "method": "sock_set_default_impl", 00:22:20.389 "params": { 00:22:20.389 "impl_name": "posix" 00:22:20.389 } 00:22:20.389 }, 00:22:20.389 { 00:22:20.389 "method": "sock_impl_set_options", 00:22:20.389 "params": { 00:22:20.389 "impl_name": "ssl", 00:22:20.389 "recv_buf_size": 4096, 00:22:20.389 "send_buf_size": 4096, 00:22:20.389 "enable_recv_pipe": true, 00:22:20.389 "enable_quickack": false, 00:22:20.389 "enable_placement_id": 0, 00:22:20.389 "enable_zerocopy_send_server": true, 00:22:20.389 "enable_zerocopy_send_client": false, 00:22:20.389 "zerocopy_threshold": 0, 00:22:20.389 "tls_version": 0, 00:22:20.389 "enable_ktls": false 00:22:20.389 } 00:22:20.389 }, 00:22:20.389 { 00:22:20.389 "method": "sock_impl_set_options", 00:22:20.389 "params": { 00:22:20.389 "impl_name": "posix", 00:22:20.389 "recv_buf_size": 2097152, 00:22:20.389 "send_buf_size": 2097152, 00:22:20.389 
"enable_recv_pipe": true, 00:22:20.389 "enable_quickack": false, 00:22:20.389 "enable_placement_id": 0, 00:22:20.389 "enable_zerocopy_send_server": true, 00:22:20.389 "enable_zerocopy_send_client": false, 00:22:20.389 "zerocopy_threshold": 0, 00:22:20.389 "tls_version": 0, 00:22:20.389 "enable_ktls": false 00:22:20.389 } 00:22:20.389 } 00:22:20.389 ] 00:22:20.389 }, 00:22:20.389 { 00:22:20.389 "subsystem": "vmd", 00:22:20.389 "config": [] 00:22:20.389 }, 00:22:20.389 { 00:22:20.389 "subsystem": "accel", 00:22:20.389 "config": [ 00:22:20.389 { 00:22:20.389 "method": "accel_set_options", 00:22:20.389 "params": { 00:22:20.389 "small_cache_size": 128, 00:22:20.389 "large_cache_size": 16, 00:22:20.389 "task_count": 2048, 00:22:20.389 "sequence_count": 2048, 00:22:20.389 "buf_count": 2048 00:22:20.389 } 00:22:20.389 } 00:22:20.389 ] 00:22:20.389 }, 00:22:20.389 { 00:22:20.389 "subsystem": "bdev", 00:22:20.389 "config": [ 00:22:20.389 { 00:22:20.389 "method": "bdev_set_options", 00:22:20.389 "params": { 00:22:20.389 "bdev_io_pool_size": 65535, 00:22:20.389 "bdev_io_cache_size": 256, 00:22:20.389 "bdev_auto_examine": true, 00:22:20.389 "iobuf_small_cache_size": 128, 00:22:20.389 "iobuf_large_cache_size": 16 00:22:20.389 } 00:22:20.389 }, 00:22:20.389 { 00:22:20.389 "method": "bdev_raid_set_options", 00:22:20.389 "params": { 00:22:20.389 "process_window_size_kb": 1024 00:22:20.389 } 00:22:20.389 }, 00:22:20.389 { 00:22:20.389 "method": "bdev_iscsi_set_options", 00:22:20.389 "params": { 00:22:20.389 "timeout_sec": 30 00:22:20.389 } 00:22:20.389 }, 00:22:20.389 { 00:22:20.389 "method": "bdev_nvme_set_options", 00:22:20.389 "params": { 00:22:20.389 "action_on_timeout": "none", 00:22:20.389 "timeout_us": 0, 00:22:20.389 "timeout_admin_us": 0, 00:22:20.389 "keep_alive_timeout_ms": 10000, 00:22:20.389 "arbitration_burst": 0, 00:22:20.389 "low_priority_weight": 0, 00:22:20.389 "medium_priority_weight": 0, 00:22:20.389 "high_priority_weight": 0, 00:22:20.389 "nvme_adminq_poll_period_us": 10000, 00:22:20.389 "nvme_ioq_poll_period_us": 0, 00:22:20.389 "io_queue_requests": 512, 00:22:20.389 "delay_cmd_submit": true, 00:22:20.389 "transport_retry_count": 4, 00:22:20.389 "bdev_retry_count": 3, 00:22:20.389 "transport_ack_timeout": 0, 00:22:20.389 "ctrlr_loss_timeout_sec": 0, 00:22:20.389 "reconnect_delay_sec": 0, 00:22:20.389 "fast_io_fail_timeout_sec": 0, 00:22:20.389 "disable_auto_failback": false, 00:22:20.389 "generate_uuids": false, 00:22:20.389 "transport_tos": 0, 00:22:20.389 "nvme_error_stat": false, 00:22:20.389 "rdma_srq_size": 0, 00:22:20.389 "io_path_stat": false, 00:22:20.389 "allow_accel_sequence": false, 00:22:20.389 "rdma_max_cq_size": 0, 00:22:20.389 "rdma_cm_event_timeout_ms": 0, 00:22:20.389 "dhchap_digests": [ 00:22:20.389 "sha256", 00:22:20.389 "sha384", 00:22:20.389 "sha512" 00:22:20.389 ], 00:22:20.389 "dhchap_dhgroups": [ 00:22:20.389 "null", 00:22:20.389 "ffdhe2048", 00:22:20.389 "ffdhe3072", 00:22:20.389 "ffdhe4096", 00:22:20.389 "ffdhe6144", 00:22:20.389 "ffdhe8192" 00:22:20.389 ] 00:22:20.389 } 00:22:20.389 }, 00:22:20.389 { 00:22:20.389 "method": "bdev_nvme_attach_controller", 00:22:20.389 "params": { 00:22:20.389 "name": "nvme0", 00:22:20.389 "trtype": "TCP", 00:22:20.389 "adrfam": "IPv4", 00:22:20.389 "traddr": "10.0.0.2", 00:22:20.389 "trsvcid": "4420", 00:22:20.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.389 "prchk_reftag": false, 00:22:20.389 "prchk_guard": false, 00:22:20.389 "ctrlr_loss_timeout_sec": 0, 00:22:20.389 "reconnect_delay_sec": 0, 00:22:20.389 
"fast_io_fail_timeout_sec": 0, 00:22:20.389 "psk": "key0", 00:22:20.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:20.389 "hdgst": false, 00:22:20.389 "ddgst": false 00:22:20.389 } 00:22:20.389 }, 00:22:20.389 { 00:22:20.389 "method": "bdev_nvme_set_hotplug", 00:22:20.389 "params": { 00:22:20.389 "period_us": 100000, 00:22:20.389 "enable": false 00:22:20.389 } 00:22:20.389 }, 00:22:20.389 { 00:22:20.389 "method": "bdev_enable_histogram", 00:22:20.389 "params": { 00:22:20.389 "name": "nvme0n1", 00:22:20.389 "enable": true 00:22:20.389 } 00:22:20.389 }, 00:22:20.389 { 00:22:20.389 "method": "bdev_wait_for_examine" 00:22:20.389 } 00:22:20.389 ] 00:22:20.389 }, 00:22:20.389 { 00:22:20.389 "subsystem": "nbd", 00:22:20.389 "config": [] 00:22:20.389 } 00:22:20.389 ] 00:22:20.389 }' 00:22:20.389 09:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:20.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:20.389 09:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:20.389 09:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.389 [2024-07-15 09:55:36.982591] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:20.389 [2024-07-15 09:55:36.982667] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1943683 ] 00:22:20.389 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.390 [2024-07-15 09:55:37.013553] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:20.390 [2024-07-15 09:55:37.045629] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.390 [2024-07-15 09:55:37.136409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.648 [2024-07-15 09:55:37.318011] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:21.214 09:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:21.214 09:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:21.214 09:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:21.214 09:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:22:21.472 09:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.472 09:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:21.730 Running I/O for 1 seconds... 
00:22:22.660 00:22:22.660 Latency(us) 00:22:22.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.660 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:22.660 Verification LBA range: start 0x0 length 0x2000 00:22:22.661 nvme0n1 : 1.04 3193.12 12.47 0.00 0.00 39417.98 6310.87 62914.56 00:22:22.661 =================================================================================================================== 00:22:22.661 Total : 3193.12 12.47 0.00 0.00 39417.98 6310.87 62914.56 00:22:22.661 0 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:22.661 nvmf_trace.0 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1943683 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1943683 ']' 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1943683 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1943683 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1943683' 00:22:22.661 killing process with pid 1943683 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1943683 00:22:22.661 Received shutdown signal, test time was about 1.000000 seconds 00:22:22.661 00:22:22.661 Latency(us) 00:22:22.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.661 =================================================================================================================== 00:22:22.661 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:22.661 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1943683 00:22:22.918 09:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:22.918 09:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:22.918 09:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 
00:22:22.918 09:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:22.918 09:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:22.918 09:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:22.918 09:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:22.918 rmmod nvme_tcp 00:22:22.918 rmmod nvme_fabrics 00:22:22.918 rmmod nvme_keyring 00:22:22.918 09:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:23.176 09:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:23.176 09:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:23.176 09:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1943529 ']' 00:22:23.176 09:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1943529 00:22:23.176 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1943529 ']' 00:22:23.176 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1943529 00:22:23.176 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:23.176 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:23.176 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1943529 00:22:23.176 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:23.176 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:23.176 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1943529' 00:22:23.176 killing process with pid 1943529 00:22:23.176 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1943529 00:22:23.176 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1943529 00:22:23.435 09:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:23.435 09:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:23.435 09:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:23.435 09:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.435 09:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:23.435 09:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.435 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.435 09:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.384 09:55:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:25.385 09:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.phQFM9ffaP /tmp/tmp.c1h7sIslt9 /tmp/tmp.QridQIgu4A 00:22:25.385 00:22:25.385 real 1m18.825s 00:22:25.385 user 2m7.229s 00:22:25.385 sys 0m26.487s 00:22:25.385 09:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:25.385 09:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.385 ************************************ 00:22:25.385 END TEST nvmf_tls 00:22:25.385 ************************************ 00:22:25.385 09:55:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:25.385 09:55:42 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:25.385 09:55:42 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:25.385 09:55:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:25.385 09:55:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:25.385 ************************************ 00:22:25.385 START TEST nvmf_fips 00:22:25.385 ************************************ 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:25.385 * Looking for test storage... 00:22:25.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:25.385 
09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:25.385 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:25.386 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:25.386 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:25.386 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:25.386 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:25.386 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:25.386 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:25.386 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:25.644 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:25.644 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:25.644 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:25.644 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:25.644 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:25.644 09:55:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:22:25.644 09:55:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:25.644 09:55:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:22:25.644 09:55:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:25.644 09:55:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:22:25.644 09:55:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:25.644 09:55:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:22:25.644 09:55:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:25.644 09:55:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:22:25.644 09:55:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:22:25.644 09:55:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:22:25.644 Error setting digest 00:22:25.645 00F29C7D2A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:25.645 00F29C7D2A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:25.645 09:55:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:22:25.645 09:55:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:25.645 09:55:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:25.645 09:55:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:25.645 09:55:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:25.645 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:25.645 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.645 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:25.645 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:25.645 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:25.645 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.645 09:55:42 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:25.645 09:55:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.645 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:25.645 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:25.645 09:55:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:25.645 09:55:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:27.545 
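(Editor's note: the discovery pass above fills the per-family PCI ID arrays — e810, x722, mlx — and then, in the enumeration that follows, resolves each matching NIC to its kernel interface through sysfs. A minimal stand-alone sketch of that lookup, reusing an address that appears in this run; interface names will differ per host:

    pci=0000:0a:00.0                                   # example address from the trace above
    # nvmf/common.sh@383 performs exactly this glob to find bound net devices:
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue                      # glob can miss if no netdev is bound
        echo "Found net devices under $pci: ${dev##*/}"
    done

End of note.)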
09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:27.545 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:27.545 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.545 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:27.546 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:27.546 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:27.546 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:27.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:27.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:22:27.804 00:22:27.804 --- 10.0.0.2 ping statistics --- 00:22:27.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.804 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:27.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:27.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:22:27.804 00:22:27.804 --- 10.0.0.1 ping statistics --- 00:22:27.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.804 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1945922 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1945922 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1945922 ']' 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:27.804 09:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:27.804 [2024-07-15 09:55:44.449831] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:27.804 [2024-07-15 09:55:44.449904] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.804 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.804 [2024-07-15 09:55:44.485802] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:27.804 [2024-07-15 09:55:44.516032] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.062 [2024-07-15 09:55:44.608820] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:28.062 [2024-07-15 09:55:44.608874] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.062 [2024-07-15 09:55:44.608911] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.062 [2024-07-15 09:55:44.608932] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.062 [2024-07-15 09:55:44.608951] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.062 [2024-07-15 09:55:44.608988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.062 09:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:28.062 09:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:28.062 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:28.062 09:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:28.062 09:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:28.062 09:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.062 09:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:28.062 09:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:28.062 09:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:28.062 09:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:28.062 09:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:28.062 09:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:28.062 09:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:28.062 09:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:28.321 [2024-07-15 09:55:44.987424] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.321 [2024-07-15 09:55:45.003422] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:28.321 [2024-07-15 09:55:45.003686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.321 [2024-07-15 09:55:45.035901] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:28.321 malloc0 00:22:28.321 09:55:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:28.321 09:55:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1946063 00:22:28.321 09:55:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1946063 /var/tmp/bdevperf.sock 00:22:28.321 09:55:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1946063 ']' 00:22:28.321 09:55:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.321 09:55:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:28.321 09:55:45 
nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:28.321 09:55:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.321 09:55:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.321 09:55:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:28.579 [2024-07-15 09:55:45.128036] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:28.579 [2024-07-15 09:55:45.128115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1946063 ] 00:22:28.579 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.579 [2024-07-15 09:55:45.161146] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:28.579 [2024-07-15 09:55:45.189359] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.579 [2024-07-15 09:55:45.279630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.513 09:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.513 09:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:29.513 09:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:29.513 [2024-07-15 09:55:46.286470] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:29.513 [2024-07-15 09:55:46.286579] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:29.771 TLSTESTn1 00:22:29.771 09:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:29.771 Running I/O for 10 seconds... 
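(Editor's note: while the 10-second run is in flight, the TLS exercise traced above reduces to three steps. This condensed replay abbreviates the full Jenkins paths to bare tool names but keeps every flag verbatim from the trace:

    # 1. Start bdevperf in wait mode (-z) on its own RPC socket.
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # 2. Attach a TLS NVMe/TCP controller; key.txt carries the
    #    NVMeTLSkey-1:01:... interchange-format PSK written earlier.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt

    # 3. Kick off the queued verify workload over the encrypted connection.
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

End of note.)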
00:22:41.966 00:22:41.966 Latency(us) 00:22:41.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.966 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:41.966 Verification LBA range: start 0x0 length 0x2000 00:22:41.966 TLSTESTn1 : 10.02 3184.52 12.44 0.00 0.00 40116.71 8446.86 60196.03 00:22:41.966 =================================================================================================================== 00:22:41.966 Total : 3184.52 12.44 0.00 0.00 40116.71 8446.86 60196.03 00:22:41.966 0 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:41.966 nvmf_trace.0 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1946063 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1946063 ']' 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1946063 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1946063 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1946063' 00:22:41.966 killing process with pid 1946063 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1946063 00:22:41.966 Received shutdown signal, test time was about 10.000000 seconds 00:22:41.966 00:22:41.966 Latency(us) 00:22:41.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.966 =================================================================================================================== 00:22:41.966 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:41.966 [2024-07-15 09:55:56.668049] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1946063 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:41.966 rmmod nvme_tcp 00:22:41.966 rmmod nvme_fabrics 00:22:41.966 rmmod nvme_keyring 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1945922 ']' 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1945922 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1945922 ']' 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1945922 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1945922 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:41.966 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:41.967 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1945922' 00:22:41.967 killing process with pid 1945922 00:22:41.967 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1945922 00:22:41.967 [2024-07-15 09:55:56.986726] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:41.967 09:55:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1945922 00:22:41.967 09:55:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:41.967 09:55:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:41.967 09:55:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:41.967 09:55:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:41.967 09:55:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:41.967 09:55:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.967 09:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.967 09:55:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.532 09:55:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:42.532 09:55:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:42.532 00:22:42.532 real 0m17.217s 00:22:42.532 user 0m19.463s 00:22:42.532 sys 0m6.811s 00:22:42.532 09:55:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:42.532 09:55:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:42.532 ************************************ 00:22:42.532 END TEST nvmf_fips 
00:22:42.532 ************************************ 00:22:42.532 09:55:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:42.532 09:55:59 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:22:42.532 09:55:59 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:42.532 09:55:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:42.532 09:55:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:42.532 09:55:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:42.791 ************************************ 00:22:42.791 START TEST nvmf_fuzz 00:22:42.791 ************************************ 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:42.791 * Looking for test storage... 00:22:42.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:42.791 09:55:59 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:22:42.791 09:55:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:44.691 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.691 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:22:44.691 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:44.691 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:44.691 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:44.692 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:44.692 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:44.692 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:44.692 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:44.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:22:44.692 00:22:44.692 --- 10.0.0.2 ping statistics --- 00:22:44.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.692 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:44.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:22:44.692 00:22:44.692 --- 10.0.0.1 ping statistics --- 00:22:44.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.692 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1949321 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1949321 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1949321 ']' 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:44.692 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
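(Editor's note: the fuzz-target bring-up that follows is the standard SPDK sequence, condensed here from the rpc_cmd calls traced below — rpc_cmd being the harness's wrapper around the scripts/rpc.py JSON-RPC client; paths abbreviated, flags verbatim:

    rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport for the fuzz target
    rpc.py bdev_malloc_create -b Malloc0 64 512        # 64 MiB malloc bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # then 30 s of seeded random admin/IO commands against the listener:
    nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a

End of note.)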
00:22:44.693 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:44.693 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:44.950 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:44.950 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:22:44.950 09:56:01 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:44.950 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.950 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:44.950 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.950 09:56:01 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:44.950 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.950 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:44.950 Malloc0 00:22:44.950 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.951 09:56:01 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:44.951 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.951 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:44.951 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.951 09:56:01 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:44.951 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.951 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:44.951 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.951 09:56:01 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:44.951 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.951 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:44.951 09:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.951 09:56:01 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:22:44.951 09:56:01 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:23:17.066 Fuzzing completed. 
Shutting down the fuzz application 00:23:17.066 00:23:17.066 Dumping successful admin opcodes: 00:23:17.066 8, 9, 10, 24, 00:23:17.066 Dumping successful io opcodes: 00:23:17.066 0, 9, 00:23:17.066 NS: 0x200003aeff00 I/O qp, Total commands completed: 474416, total successful commands: 2740, random_seed: 692997696 00:23:17.066 NS: 0x200003aeff00 admin qp, Total commands completed: 58384, total successful commands: 464, random_seed: 2565075008 00:23:17.066 09:56:32 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:17.066 Fuzzing completed. Shutting down the fuzz application 00:23:17.066 00:23:17.066 Dumping successful admin opcodes: 00:23:17.066 24, 00:23:17.066 Dumping successful io opcodes: 00:23:17.066 00:23:17.066 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3056591732 00:23:17.066 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3056754976 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:17.066 rmmod nvme_tcp 00:23:17.066 rmmod nvme_fabrics 00:23:17.066 rmmod nvme_keyring 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1949321 ']' 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1949321 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1949321 ']' 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 1949321 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1949321 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1949321' 00:23:17.066 killing process with pid 1949321 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 1949321 00:23:17.066 09:56:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 1949321 00:23:17.325 09:56:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:17.325 09:56:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:17.325 09:56:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:17.326 09:56:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:17.326 09:56:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:17.326 09:56:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.326 09:56:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.326 09:56:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.228 09:56:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:19.228 09:56:35 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:19.228 00:23:19.228 real 0m36.667s 00:23:19.228 user 0m50.399s 00:23:19.228 sys 0m15.136s 00:23:19.228 09:56:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:19.228 09:56:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:19.228 ************************************ 00:23:19.228 END TEST nvmf_fuzz 00:23:19.228 ************************************ 00:23:19.228 09:56:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:19.228 09:56:36 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:19.228 09:56:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:19.228 09:56:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:19.228 09:56:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:19.486 ************************************ 00:23:19.486 START TEST nvmf_multiconnection 00:23:19.486 ************************************ 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:19.486 * Looking for test storage... 
00:23:19.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.486 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:23:19.487 09:56:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:21.384 09:56:37 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:21.384 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:21.384 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:21.384 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:21.384 09:56:37 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:21.384 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:21.384 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:21.385 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:21.385 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:21.385 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:21.385 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:21.385 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:21.385 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
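The nvmf_tcp_init trace around this point splits the two-port E810 NIC into a point-to-point test link: the target port cvl_0_0 moves into its own network namespace with 10.0.0.2/24, while the initiator keeps cvl_0_1 at 10.0.0.1/24 in the root namespace. Condensed from the traced commands (the loopback bring-up and iptables rule appear a few lines further on):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target side moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

The two pings that follow verify the link in both directions before the target is started.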
00:23:21.385 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:21.385 09:56:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:21.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:21.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:23:21.385 00:23:21.385 --- 10.0.0.2 ping statistics --- 00:23:21.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.385 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:21.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:21.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:23:21.385 00:23:21.385 --- 10.0.0.1 ping statistics --- 00:23:21.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.385 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1954924 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1954924 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 1954924 ']' 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
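With the link verified, nvmfappstart launches the target inside the namespace and blocks until the RPC socket answers. A sketch of the traced sequence, paths shortened to the spdk checkout root (backgrounding with & and capturing $! is an assumption about what the wrapper does; the trace only shows the resulting pid, 1954924):

    # -i 0: shm id; -e 0xFFFF: tracepoint group mask; -m 0xF: run on cores 0-3.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # polls until /var/tmp/spdk.sock accepts RPCs

The EAL banner below confirms the masks: four reactors come up on cores 0-3, and app_setup_trace echoes the 0xFFFF tracepoint group mask back.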
00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:21.385 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.385 [2024-07-15 09:56:38.092133] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:23:21.385 [2024-07-15 09:56:38.092259] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.385 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.385 [2024-07-15 09:56:38.131330] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:21.385 [2024-07-15 09:56:38.162264] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:21.642 [2024-07-15 09:56:38.252586] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.642 [2024-07-15 09:56:38.252636] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.642 [2024-07-15 09:56:38.252664] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.642 [2024-07-15 09:56:38.252675] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.642 [2024-07-15 09:56:38.252684] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.642 [2024-07-15 09:56:38.252778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.642 [2024-07-15 09:56:38.252841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.642 [2024-07-15 09:56:38.252912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:21.642 [2024-07-15 09:56:38.252915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.642 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:21.642 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:23:21.642 09:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:21.642 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:21.642 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.642 09:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.642 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:21.642 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.642 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.642 [2024-07-15 09:56:38.412646] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.642 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.642 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:23:21.642 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:21.642 09:56:38 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:21.642 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.642 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.900 Malloc1 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.900 [2024-07-15 09:56:38.470004] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.900 Malloc2 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:21.900 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.901 Malloc3 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.901 Malloc4 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 
Malloc4 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.901 Malloc5 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.901 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 Malloc6 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:22.160 09:56:38 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 Malloc7 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 Malloc8 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 Malloc9 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 09:56:38 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 Malloc10 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 Malloc11 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.160 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.417 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.417 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:22.417 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.417 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.417 09:56:38 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.417 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:23:22.417 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.417 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:22.417 09:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.417 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:23:22.417 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:22.417 09:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:22.981 09:56:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:22.981 09:56:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:22.981 09:56:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:22.981 09:56:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:22.981 09:56:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:24.876 09:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:24.876 09:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:24.876 09:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:23:25.134 09:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:25.134 09:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:25.134 09:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:25.134 09:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:25.134 09:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:23:25.698 09:56:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:25.698 09:56:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:25.698 09:56:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:25.698 09:56:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:25.698 09:56:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:27.591 09:56:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:27.591 09:56:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:27.591 09:56:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:23:27.591 09:56:44 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:27.591 09:56:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:27.591 09:56:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:27.591 09:56:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:27.591 09:56:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:23:28.522 09:56:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:28.522 09:56:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:28.522 09:56:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:28.522 09:56:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:28.522 09:56:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:30.415 09:56:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:30.415 09:56:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:30.415 09:56:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:23:30.415 09:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:30.415 09:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:30.415 09:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:30.415 09:56:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.415 09:56:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:23:30.979 09:56:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:23:30.979 09:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:30.979 09:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:30.979 09:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:30.979 09:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:33.501 09:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:33.501 09:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:33.501 09:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:23:33.501 09:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:33.501 09:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:33.501 09:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 
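The long run of near-identical RPC blocks above is multiconnection.sh repeating the fuzz test's one-subsystem recipe eleven times (NVMF_SUBSYS=11) and then attaching the initiator to each target in turn; the SPDK1..SPDK4 connects are visible above, and SPDK5..SPDK11 follow below. Reconstructed from the traced script lines 21-30, with waitforserial sketched from its traced autotest_common.sh lines 1198-1208; the helper's exact body, in particular the retry interval, is an assumption, since every call in this trace returns on its first poll:

    # Poll lsblk until a block device advertising the expected serial appears.
    waitforserial() {
        local i=0
        local nvme_device_counter=1 nvme_devices=0
        [[ -n $2 ]] && nvme_device_counter=$2   # optional expected device count
        sleep 2                                 # initial settle time, per the trace
        while ((i++ <= 15)); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")
            ((nvme_devices == nvme_device_counter)) && return 0
            sleep 1                             # retry interval: assumed, never reached here
        done
        return 1
    }

    NVMF_SUBSYS=11
    for i in $(seq 1 $NVMF_SUBSYS); do
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

    # Connect the host to each subsystem; all eleven listeners share
    # 10.0.0.2:4420 and differ only in subsystem NQN.
    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme connect "${NVME_HOST[@]}" -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
        waitforserial "SPDK$i"
    done

In the trace every waitforserial reports nvme_devices=1 on its first check, so each namespace shows up within the initial two-second sleep.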
00:23:33.501 09:56:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:33.501 09:56:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:23:33.756 09:56:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:33.756 09:56:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:33.756 09:56:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:33.756 09:56:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:33.756 09:56:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:35.681 09:56:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:35.681 09:56:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:35.681 09:56:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:23:35.681 09:56:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:35.681 09:56:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:35.681 09:56:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:35.939 09:56:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:35.939 09:56:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:23:36.503 09:56:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:23:36.503 09:56:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:36.503 09:56:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:36.503 09:56:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:36.503 09:56:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:39.032 09:56:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:39.032 09:56:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:39.032 09:56:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:23:39.032 09:56:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:39.032 09:56:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:39.032 09:56:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:39.032 09:56:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:39.032 09:56:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:23:39.288 09:56:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:39.288 09:56:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:39.288 09:56:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:39.288 09:56:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:39.288 09:56:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:41.814 09:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:41.814 09:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:41.814 09:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:23:41.814 09:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:41.814 09:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:41.814 09:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:41.814 09:56:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:41.814 09:56:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:23:42.378 09:56:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:23:42.378 09:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:42.378 09:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:42.378 09:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:42.378 09:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:44.277 09:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:44.277 09:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:44.277 09:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:23:44.277 09:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:44.277 09:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:44.277 09:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:44.277 09:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:44.277 09:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:23:45.210 09:57:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:45.210 09:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 
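Every iteration above repeats the same attach pattern: the script loops over cnode1..cnode$NVMF_SUBSYS, issues nvme connect over TCP, then blocks in waitforserial until the namespace shows up. A sketch of that loop as traced from target/multiconnection.sh lines 28-30; $HOSTNQN and $HOSTID are hypothetical shorthand for the literal host UUID shown in the log:

for i in $(seq 1 $NVMF_SUBSYS); do
	# attach subsystem i over NVMe/TCP (the target listens on 10.0.0.2:4420)
	nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
		-t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
	# wait until a block device with serial SPDK$i appears in lsblk
	waitforserial "SPDK$i"
done

With NVMF_SUBSYS=11 this yields the eleven /dev/nvme*n1 devices the fio job files below point at, one per SPDK subsystem.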
00:23:45.210 09:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:45.210 09:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:45.210 09:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:47.112 09:57:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:47.112 09:57:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:47.112 09:57:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:23:47.112 09:57:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:47.112 09:57:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:47.112 09:57:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:47.112 09:57:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:47.112 09:57:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:23:47.678 09:57:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:23:47.678 09:57:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:47.678 09:57:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:47.678 09:57:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:47.678 09:57:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:50.213 09:57:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:50.213 09:57:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:50.213 09:57:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:23:50.213 09:57:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:50.213 09:57:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:50.213 09:57:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:50.213 09:57:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.213 09:57:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:23:50.471 09:57:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:23:50.471 09:57:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:50.471 09:57:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:50.471 09:57:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:50.471 09:57:07 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1205 -- # sleep 2 00:23:52.998 09:57:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:52.998 09:57:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:52.998 09:57:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:23:52.998 09:57:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:52.998 09:57:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:52.998 09:57:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:52.998 09:57:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:23:52.998 [global] 00:23:52.998 thread=1 00:23:52.998 invalidate=1 00:23:52.998 rw=read 00:23:52.998 time_based=1 00:23:52.998 runtime=10 00:23:52.998 ioengine=libaio 00:23:52.998 direct=1 00:23:52.998 bs=262144 00:23:52.998 iodepth=64 00:23:52.998 norandommap=1 00:23:52.998 numjobs=1 00:23:52.998 00:23:52.998 [job0] 00:23:52.998 filename=/dev/nvme0n1 00:23:52.998 [job1] 00:23:52.998 filename=/dev/nvme10n1 00:23:52.998 [job2] 00:23:52.998 filename=/dev/nvme1n1 00:23:52.998 [job3] 00:23:52.998 filename=/dev/nvme2n1 00:23:52.998 [job4] 00:23:52.998 filename=/dev/nvme3n1 00:23:52.998 [job5] 00:23:52.999 filename=/dev/nvme4n1 00:23:52.999 [job6] 00:23:52.999 filename=/dev/nvme5n1 00:23:52.999 [job7] 00:23:52.999 filename=/dev/nvme6n1 00:23:52.999 [job8] 00:23:52.999 filename=/dev/nvme7n1 00:23:52.999 [job9] 00:23:52.999 filename=/dev/nvme8n1 00:23:52.999 [job10] 00:23:52.999 filename=/dev/nvme9n1 00:23:52.999 Could not set queue depth (nvme0n1) 00:23:52.999 Could not set queue depth (nvme10n1) 00:23:52.999 Could not set queue depth (nvme1n1) 00:23:52.999 Could not set queue depth (nvme2n1) 00:23:52.999 Could not set queue depth (nvme3n1) 00:23:52.999 Could not set queue depth (nvme4n1) 00:23:52.999 Could not set queue depth (nvme5n1) 00:23:52.999 Could not set queue depth (nvme6n1) 00:23:52.999 Could not set queue depth (nvme7n1) 00:23:52.999 Could not set queue depth (nvme8n1) 00:23:52.999 Could not set queue depth (nvme9n1) 00:23:52.999 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:52.999 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:52.999 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:52.999 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:52.999 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:52.999 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:52.999 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:52.999 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:52.999 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:52.999 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:52.999 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:52.999 fio-3.35 00:23:52.999 Starting 11 threads 00:24:05.241 00:24:05.241 job0: (groupid=0, jobs=1): err= 0: pid=1959781: Mon Jul 15 09:57:20 2024 00:24:05.241 read: IOPS=444, BW=111MiB/s (116MB/s)(1124MiB/10122msec) 00:24:05.241 slat (usec): min=9, max=135553, avg=1858.09, stdev=6490.27 00:24:05.241 clat (msec): min=9, max=337, avg=142.15, stdev=54.70 00:24:05.241 lat (msec): min=9, max=337, avg=144.01, stdev=55.42 00:24:05.241 clat percentiles (msec): 00:24:05.241 | 1.00th=[ 22], 5.00th=[ 59], 10.00th=[ 77], 20.00th=[ 102], 00:24:05.241 | 30.00th=[ 116], 40.00th=[ 127], 50.00th=[ 136], 60.00th=[ 148], 00:24:05.241 | 70.00th=[ 165], 80.00th=[ 184], 90.00th=[ 215], 95.00th=[ 249], 00:24:05.241 | 99.00th=[ 292], 99.50th=[ 305], 99.90th=[ 338], 99.95th=[ 338], 00:24:05.241 | 99.99th=[ 338] 00:24:05.241 bw ( KiB/s): min=71680, max=180736, per=6.29%, avg=113466.70, stdev=29408.68, samples=20 00:24:05.241 iops : min= 280, max= 706, avg=443.20, stdev=114.92, samples=20 00:24:05.241 lat (msec) : 10=0.02%, 20=0.69%, 50=3.09%, 100=15.55%, 250=75.82% 00:24:05.241 lat (msec) : 500=4.83% 00:24:05.241 cpu : usr=0.28%, sys=1.47%, ctx=996, majf=0, minf=4097 00:24:05.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:05.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.241 issued rwts: total=4495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.241 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.241 job1: (groupid=0, jobs=1): err= 0: pid=1959782: Mon Jul 15 09:57:20 2024 00:24:05.241 read: IOPS=439, BW=110MiB/s (115MB/s)(1112MiB/10128msec) 00:24:05.241 slat (usec): min=14, max=111606, avg=1720.35, stdev=6398.92 00:24:05.241 clat (msec): min=6, max=341, avg=143.92, stdev=60.69 00:24:05.241 lat (msec): min=6, max=341, avg=145.64, stdev=61.67 00:24:05.241 clat percentiles (msec): 00:24:05.241 | 1.00th=[ 15], 5.00th=[ 48], 10.00th=[ 65], 20.00th=[ 93], 00:24:05.241 | 30.00th=[ 109], 40.00th=[ 127], 50.00th=[ 142], 60.00th=[ 157], 00:24:05.241 | 70.00th=[ 174], 80.00th=[ 199], 90.00th=[ 228], 95.00th=[ 253], 00:24:05.241 | 99.00th=[ 279], 99.50th=[ 288], 99.90th=[ 313], 99.95th=[ 313], 00:24:05.241 | 99.99th=[ 342] 00:24:05.241 bw ( KiB/s): min=57856, max=177152, per=6.22%, avg=112204.80, stdev=33121.16, samples=20 00:24:05.241 iops : min= 226, max= 692, avg=438.30, stdev=129.38, samples=20 00:24:05.241 lat (msec) : 10=0.34%, 20=1.60%, 50=3.80%, 100=19.07%, 250=69.87% 00:24:05.241 lat (msec) : 500=5.33% 00:24:05.241 cpu : usr=0.24%, sys=1.63%, ctx=1074, majf=0, minf=4097 00:24:05.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:05.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.241 issued rwts: total=4447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.241 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.241 job2: (groupid=0, jobs=1): err= 0: pid=1959783: Mon Jul 15 09:57:20 2024 00:24:05.241 read: IOPS=674, BW=169MiB/s (177MB/s)(1708MiB/10122msec) 00:24:05.241 slat (usec): min=9, max=117658, avg=1217.99, stdev=4617.32 00:24:05.241 clat (usec): min=1286, max=327684, avg=93525.40, stdev=64562.56 
00:24:05.241 lat (usec): min=1305, max=358081, avg=94743.39, stdev=65430.15 00:24:05.241 clat percentiles (usec): 00:24:05.241 | 1.00th=[ 1844], 5.00th=[ 9634], 10.00th=[ 33817], 20.00th=[ 52691], 00:24:05.241 | 30.00th=[ 58983], 40.00th=[ 63701], 50.00th=[ 70779], 60.00th=[ 80217], 00:24:05.241 | 70.00th=[ 94897], 80.00th=[143655], 90.00th=[204473], 95.00th=[235930], 00:24:05.241 | 99.00th=[274727], 99.50th=[287310], 99.90th=[316670], 99.95th=[320865], 00:24:05.241 | 99.99th=[329253] 00:24:05.241 bw ( KiB/s): min=74240, max=317952, per=9.60%, avg=173235.20, stdev=80816.87, samples=20 00:24:05.241 iops : min= 290, max= 1242, avg=676.70, stdev=315.69, samples=20 00:24:05.241 lat (msec) : 2=1.07%, 4=1.01%, 10=3.04%, 20=1.96%, 50=9.82% 00:24:05.241 lat (msec) : 100=55.37%, 250=24.81%, 500=2.91% 00:24:05.241 cpu : usr=0.41%, sys=2.02%, ctx=1560, majf=0, minf=4097 00:24:05.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:05.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.241 issued rwts: total=6831,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.242 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.242 job3: (groupid=0, jobs=1): err= 0: pid=1959784: Mon Jul 15 09:57:20 2024 00:24:05.242 read: IOPS=577, BW=144MiB/s (151MB/s)(1454MiB/10062msec) 00:24:05.242 slat (usec): min=10, max=155325, avg=1021.91, stdev=5473.06 00:24:05.242 clat (usec): min=1063, max=322948, avg=109665.47, stdev=74939.46 00:24:05.242 lat (usec): min=1086, max=329455, avg=110687.38, stdev=75927.56 00:24:05.242 clat percentiles (msec): 00:24:05.242 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 24], 20.00th=[ 47], 00:24:05.242 | 30.00th=[ 61], 40.00th=[ 73], 50.00th=[ 86], 60.00th=[ 106], 00:24:05.242 | 70.00th=[ 146], 80.00th=[ 190], 90.00th=[ 226], 95.00th=[ 251], 00:24:05.242 | 99.00th=[ 284], 99.50th=[ 288], 99.90th=[ 309], 99.95th=[ 321], 00:24:05.242 | 99.99th=[ 321] 00:24:05.242 bw ( KiB/s): min=57856, max=282112, per=8.16%, avg=147225.60, stdev=71735.12, samples=20 00:24:05.242 iops : min= 226, max= 1102, avg=575.10, stdev=280.22, samples=20 00:24:05.242 lat (msec) : 2=0.12%, 4=1.22%, 10=3.77%, 20=3.80%, 50=13.28% 00:24:05.242 lat (msec) : 100=36.00%, 250=36.81%, 500=5.01% 00:24:05.242 cpu : usr=0.22%, sys=1.47%, ctx=1367, majf=0, minf=3721 00:24:05.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:05.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.242 issued rwts: total=5814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.242 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.242 job4: (groupid=0, jobs=1): err= 0: pid=1959785: Mon Jul 15 09:57:20 2024 00:24:05.242 read: IOPS=769, BW=192MiB/s (202MB/s)(1930MiB/10031msec) 00:24:05.242 slat (usec): min=13, max=60774, avg=1219.12, stdev=3806.23 00:24:05.242 clat (msec): min=2, max=224, avg=81.88, stdev=39.36 00:24:05.242 lat (msec): min=3, max=224, avg=83.10, stdev=39.96 00:24:05.242 clat percentiles (msec): 00:24:05.242 | 1.00th=[ 12], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 53], 00:24:05.242 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 68], 60.00th=[ 77], 00:24:05.242 | 70.00th=[ 93], 80.00th=[ 123], 90.00th=[ 146], 95.00th=[ 163], 00:24:05.242 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 197], 99.95th=[ 213], 00:24:05.242 | 99.99th=[ 226] 
00:24:05.242 bw ( KiB/s): min=95744, max=297984, per=10.87%, avg=196030.50, stdev=70868.71, samples=20 00:24:05.242 iops : min= 374, max= 1164, avg=765.70, stdev=276.89, samples=20 00:24:05.242 lat (msec) : 4=0.09%, 10=0.54%, 20=1.39%, 50=13.48%, 100=57.03% 00:24:05.242 lat (msec) : 250=27.46% 00:24:05.242 cpu : usr=0.54%, sys=2.55%, ctx=1537, majf=0, minf=4097 00:24:05.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:05.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.242 issued rwts: total=7720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.242 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.242 job5: (groupid=0, jobs=1): err= 0: pid=1959786: Mon Jul 15 09:57:20 2024 00:24:05.242 read: IOPS=526, BW=132MiB/s (138MB/s)(1334MiB/10129msec) 00:24:05.242 slat (usec): min=14, max=138957, avg=1830.66, stdev=6055.41 00:24:05.242 clat (msec): min=15, max=360, avg=119.60, stdev=75.48 00:24:05.242 lat (msec): min=15, max=402, avg=121.43, stdev=76.70 00:24:05.242 clat percentiles (msec): 00:24:05.242 | 1.00th=[ 21], 5.00th=[ 30], 10.00th=[ 32], 20.00th=[ 41], 00:24:05.242 | 30.00th=[ 72], 40.00th=[ 85], 50.00th=[ 93], 60.00th=[ 126], 00:24:05.242 | 70.00th=[ 163], 80.00th=[ 201], 90.00th=[ 230], 95.00th=[ 259], 00:24:05.242 | 99.00th=[ 300], 99.50th=[ 309], 99.90th=[ 338], 99.95th=[ 342], 00:24:05.242 | 99.99th=[ 363] 00:24:05.242 bw ( KiB/s): min=60416, max=459776, per=7.48%, avg=134937.60, stdev=95929.85, samples=20 00:24:05.242 iops : min= 236, max= 1796, avg=527.10, stdev=374.73, samples=20 00:24:05.242 lat (msec) : 20=0.84%, 50=21.88%, 100=30.93%, 250=39.48%, 500=6.86% 00:24:05.242 cpu : usr=0.37%, sys=1.67%, ctx=1078, majf=0, minf=4097 00:24:05.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:05.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.242 issued rwts: total=5334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.242 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.242 job6: (groupid=0, jobs=1): err= 0: pid=1959788: Mon Jul 15 09:57:20 2024 00:24:05.242 read: IOPS=1181, BW=295MiB/s (310MB/s)(2973MiB/10065msec) 00:24:05.242 slat (usec): min=13, max=29216, avg=805.33, stdev=2331.53 00:24:05.242 clat (msec): min=2, max=142, avg=53.32, stdev=23.82 00:24:05.242 lat (msec): min=2, max=142, avg=54.12, stdev=24.14 00:24:05.242 clat percentiles (msec): 00:24:05.242 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 32], 00:24:05.242 | 30.00th=[ 34], 40.00th=[ 44], 50.00th=[ 53], 60.00th=[ 58], 00:24:05.242 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 87], 95.00th=[ 96], 00:24:05.242 | 99.00th=[ 120], 99.50th=[ 124], 99.90th=[ 129], 99.95th=[ 131], 00:24:05.242 | 99.99th=[ 142] 00:24:05.242 bw ( KiB/s): min=156160, max=520704, per=16.78%, avg=302822.40, stdev=100486.16, samples=20 00:24:05.242 iops : min= 610, max= 2034, avg=1182.90, stdev=392.52, samples=20 00:24:05.242 lat (msec) : 4=0.53%, 10=0.69%, 20=1.78%, 50=43.54%, 100=49.56% 00:24:05.242 lat (msec) : 250=3.89% 00:24:05.242 cpu : usr=0.68%, sys=3.50%, ctx=2245, majf=0, minf=4097 00:24:05.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:05.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.242 issued rwts: total=11892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.242 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.242 job7: (groupid=0, jobs=1): err= 0: pid=1959792: Mon Jul 15 09:57:20 2024 00:24:05.242 read: IOPS=626, BW=157MiB/s (164MB/s)(1571MiB/10024msec) 00:24:05.242 slat (usec): min=14, max=96523, avg=1310.45, stdev=4654.81 00:24:05.242 clat (msec): min=9, max=319, avg=100.73, stdev=53.23 00:24:05.242 lat (msec): min=9, max=341, avg=102.04, stdev=53.86 00:24:05.242 clat percentiles (msec): 00:24:05.242 | 1.00th=[ 27], 5.00th=[ 44], 10.00th=[ 55], 20.00th=[ 61], 00:24:05.242 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 79], 60.00th=[ 94], 00:24:05.242 | 70.00th=[ 127], 80.00th=[ 144], 90.00th=[ 169], 95.00th=[ 194], 00:24:05.242 | 99.00th=[ 279], 99.50th=[ 296], 99.90th=[ 309], 99.95th=[ 313], 00:24:05.242 | 99.99th=[ 321] 00:24:05.242 bw ( KiB/s): min=70656, max=287744, per=8.83%, avg=159232.00, stdev=66294.97, samples=20 00:24:05.242 iops : min= 276, max= 1124, avg=622.00, stdev=258.96, samples=20 00:24:05.242 lat (msec) : 10=0.03%, 20=0.27%, 50=6.59%, 100=54.00%, 250=36.72% 00:24:05.242 lat (msec) : 500=2.39% 00:24:05.242 cpu : usr=0.28%, sys=2.20%, ctx=1419, majf=0, minf=4097 00:24:05.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:05.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.242 issued rwts: total=6283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.242 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.242 job8: (groupid=0, jobs=1): err= 0: pid=1959793: Mon Jul 15 09:57:20 2024 00:24:05.242 read: IOPS=736, BW=184MiB/s (193MB/s)(1852MiB/10066msec) 00:24:05.242 slat (usec): min=10, max=113832, avg=1193.42, stdev=3891.33 00:24:05.242 clat (msec): min=2, max=200, avg=85.70, stdev=35.47 00:24:05.242 lat (msec): min=2, max=260, avg=86.89, stdev=35.93 00:24:05.242 clat percentiles (msec): 00:24:05.242 | 1.00th=[ 11], 5.00th=[ 47], 10.00th=[ 53], 20.00th=[ 58], 00:24:05.242 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 78], 60.00th=[ 86], 00:24:05.242 | 70.00th=[ 100], 80.00th=[ 117], 90.00th=[ 140], 95.00th=[ 157], 00:24:05.242 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 192], 99.95th=[ 197], 00:24:05.242 | 99.99th=[ 201] 00:24:05.242 bw ( KiB/s): min=102400, max=280064, per=10.42%, avg=188057.60, stdev=56515.91, samples=20 00:24:05.242 iops : min= 400, max= 1094, avg=734.60, stdev=220.77, samples=20 00:24:05.242 lat (msec) : 4=0.16%, 10=0.72%, 20=1.11%, 50=5.44%, 100=62.76% 00:24:05.242 lat (msec) : 250=29.82% 00:24:05.242 cpu : usr=0.42%, sys=2.50%, ctx=1586, majf=0, minf=4097 00:24:05.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:05.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.242 issued rwts: total=7409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.242 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.242 job9: (groupid=0, jobs=1): err= 0: pid=1959794: Mon Jul 15 09:57:20 2024 00:24:05.242 read: IOPS=404, BW=101MiB/s (106MB/s)(1025MiB/10126msec) 00:24:05.242 slat (usec): min=9, max=145763, avg=1850.80, stdev=7421.63 00:24:05.242 clat (usec): min=1742, max=378874, avg=156135.81, stdev=66251.92 00:24:05.242 lat (usec): min=1765, max=378910, avg=157986.61, 
stdev=67052.02 00:24:05.242 clat percentiles (msec): 00:24:05.242 | 1.00th=[ 4], 5.00th=[ 56], 10.00th=[ 69], 20.00th=[ 102], 00:24:05.242 | 30.00th=[ 120], 40.00th=[ 134], 50.00th=[ 150], 60.00th=[ 171], 00:24:05.242 | 70.00th=[ 192], 80.00th=[ 218], 90.00th=[ 249], 95.00th=[ 271], 00:24:05.242 | 99.00th=[ 305], 99.50th=[ 313], 99.90th=[ 347], 99.95th=[ 347], 00:24:05.242 | 99.99th=[ 380] 00:24:05.242 bw ( KiB/s): min=57344, max=164352, per=5.73%, avg=103307.95, stdev=28251.50, samples=20 00:24:05.242 iops : min= 224, max= 642, avg=403.50, stdev=110.33, samples=20 00:24:05.242 lat (msec) : 2=0.07%, 4=1.07%, 10=0.88%, 20=0.56%, 50=0.85% 00:24:05.242 lat (msec) : 100=15.66%, 250=71.21%, 500=9.69% 00:24:05.242 cpu : usr=0.16%, sys=1.43%, ctx=1047, majf=0, minf=4097 00:24:05.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:05.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.242 issued rwts: total=4099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.242 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.242 job10: (groupid=0, jobs=1): err= 0: pid=1959795: Mon Jul 15 09:57:20 2024 00:24:05.242 read: IOPS=703, BW=176MiB/s (184MB/s)(1765MiB/10031msec) 00:24:05.242 slat (usec): min=14, max=78496, avg=1343.31, stdev=4083.46 00:24:05.242 clat (msec): min=4, max=224, avg=89.55, stdev=37.75 00:24:05.242 lat (msec): min=4, max=224, avg=90.89, stdev=38.30 00:24:05.242 clat percentiles (msec): 00:24:05.242 | 1.00th=[ 11], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 61], 00:24:05.242 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 77], 60.00th=[ 85], 00:24:05.242 | 70.00th=[ 105], 80.00th=[ 129], 90.00th=[ 148], 95.00th=[ 165], 00:24:05.242 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 203], 99.95th=[ 205], 00:24:05.242 | 99.99th=[ 224] 00:24:05.242 bw ( KiB/s): min=96768, max=259584, per=9.93%, avg=179109.25, stdev=59109.37, samples=20 00:24:05.242 iops : min= 378, max= 1014, avg=699.60, stdev=230.95, samples=20 00:24:05.242 lat (msec) : 10=0.98%, 20=0.35%, 50=3.84%, 100=63.76%, 250=31.07% 00:24:05.242 cpu : usr=0.38%, sys=2.42%, ctx=1453, majf=0, minf=4097 00:24:05.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:05.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:05.243 issued rwts: total=7059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.243 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:05.243 00:24:05.243 Run status group 0 (all jobs): 00:24:05.243 READ: bw=1762MiB/s (1847MB/s), 101MiB/s-295MiB/s (106MB/s-310MB/s), io=17.4GiB (18.7GB), run=10024-10129msec 00:24:05.243 00:24:05.243 Disk stats (read/write): 00:24:05.243 nvme0n1: ios=8809/0, merge=0/0, ticks=1224973/0, in_queue=1224973, util=97.17% 00:24:05.243 nvme10n1: ios=8712/0, merge=0/0, ticks=1227763/0, in_queue=1227763, util=97.39% 00:24:05.243 nvme1n1: ios=13526/0, merge=0/0, ticks=1235131/0, in_queue=1235131, util=97.65% 00:24:05.243 nvme2n1: ios=11344/0, merge=0/0, ticks=1243860/0, in_queue=1243860, util=97.80% 00:24:05.243 nvme3n1: ios=15178/0, merge=0/0, ticks=1238597/0, in_queue=1238597, util=97.84% 00:24:05.243 nvme4n1: ios=10475/0, merge=0/0, ticks=1224330/0, in_queue=1224330, util=98.18% 00:24:05.243 nvme5n1: ios=23531/0, merge=0/0, ticks=1239167/0, in_queue=1239167, util=98.34% 00:24:05.243 nvme6n1: ios=12341/0, 
merge=0/0, ticks=1237779/0, in_queue=1237779, util=98.46% 00:24:05.243 nvme7n1: ios=14571/0, merge=0/0, ticks=1239092/0, in_queue=1239092, util=98.91% 00:24:05.243 nvme8n1: ios=8071/0, merge=0/0, ticks=1235525/0, in_queue=1235525, util=99.08% 00:24:05.243 nvme9n1: ios=13837/0, merge=0/0, ticks=1236234/0, in_queue=1236234, util=99.21% 00:24:05.243 09:57:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:05.243 [global] 00:24:05.243 thread=1 00:24:05.243 invalidate=1 00:24:05.243 rw=randwrite 00:24:05.243 time_based=1 00:24:05.243 runtime=10 00:24:05.243 ioengine=libaio 00:24:05.243 direct=1 00:24:05.243 bs=262144 00:24:05.243 iodepth=64 00:24:05.243 norandommap=1 00:24:05.243 numjobs=1 00:24:05.243 00:24:05.243 [job0] 00:24:05.243 filename=/dev/nvme0n1 00:24:05.243 [job1] 00:24:05.243 filename=/dev/nvme10n1 00:24:05.243 [job2] 00:24:05.243 filename=/dev/nvme1n1 00:24:05.243 [job3] 00:24:05.243 filename=/dev/nvme2n1 00:24:05.243 [job4] 00:24:05.243 filename=/dev/nvme3n1 00:24:05.243 [job5] 00:24:05.243 filename=/dev/nvme4n1 00:24:05.243 [job6] 00:24:05.243 filename=/dev/nvme5n1 00:24:05.243 [job7] 00:24:05.243 filename=/dev/nvme6n1 00:24:05.243 [job8] 00:24:05.243 filename=/dev/nvme7n1 00:24:05.243 [job9] 00:24:05.243 filename=/dev/nvme8n1 00:24:05.243 [job10] 00:24:05.243 filename=/dev/nvme9n1 00:24:05.243 Could not set queue depth (nvme0n1) 00:24:05.243 Could not set queue depth (nvme10n1) 00:24:05.243 Could not set queue depth (nvme1n1) 00:24:05.243 Could not set queue depth (nvme2n1) 00:24:05.243 Could not set queue depth (nvme3n1) 00:24:05.243 Could not set queue depth (nvme4n1) 00:24:05.243 Could not set queue depth (nvme5n1) 00:24:05.243 Could not set queue depth (nvme6n1) 00:24:05.243 Could not set queue depth (nvme7n1) 00:24:05.243 Could not set queue depth (nvme8n1) 00:24:05.243 Could not set queue depth (nvme9n1) 00:24:05.243 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.243 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.243 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.243 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.243 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.243 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.243 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.243 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.243 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.243 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.243 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:05.243 fio-3.35 00:24:05.243 Starting 11 threads 00:24:15.264 00:24:15.264 job0: (groupid=0, jobs=1): err= 0: pid=1960838: Mon Jul 15 09:57:30 2024 
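The fio-wrapper invocations in this log render the [global]/[jobN] file printed above from their arguments (-i block size in bytes, -d iodepth, -t workload, -r runtime in seconds) and aim one job at each attached namespace. Against a single device, a roughly equivalent plain fio command would look like the sketch below; the flag mapping is inferred from the printed job file, not from the wrapper source:

# one job of the randwrite phase, spelled out as direct fio flags
fio --name=job0 --filename=/dev/nvme0n1 \
	--rw=randwrite --bs=262144 --iodepth=64 \
	--runtime=10 --time_based=1 \
	--ioengine=libaio --direct=1 \
	--norandommap=1 --numjobs=1 --invalidate=1 --thread=1

The "Could not set queue depth" warnings are expected here: fio probes the block-layer nr_requests of each NVMe-oF device and falls back gracefully when it cannot raise it, so they do not affect the pass/fail result.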
00:24:15.264 write: IOPS=428, BW=107MiB/s (112MB/s)(1079MiB/10071msec); 0 zone resets 00:24:15.264 slat (usec): min=21, max=68196, avg=1881.25, stdev=4729.14 00:24:15.264 clat (usec): min=1701, max=566454, avg=147306.19, stdev=74983.48 00:24:15.264 lat (msec): min=2, max=566, avg=149.19, stdev=75.96 00:24:15.264 clat percentiles (msec): 00:24:15.264 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 66], 20.00th=[ 102], 00:24:15.264 | 30.00th=[ 117], 40.00th=[ 128], 50.00th=[ 140], 60.00th=[ 155], 00:24:15.264 | 70.00th=[ 169], 80.00th=[ 192], 90.00th=[ 230], 95.00th=[ 259], 00:24:15.264 | 99.00th=[ 481], 99.50th=[ 518], 99.90th=[ 558], 99.95th=[ 567], 00:24:15.264 | 99.99th=[ 567] 00:24:15.264 bw ( KiB/s): min=48128, max=187392, per=7.83%, avg=108913.70, stdev=36974.94, samples=20 00:24:15.264 iops : min= 188, max= 732, avg=425.40, stdev=144.43, samples=20 00:24:15.264 lat (msec) : 2=0.02%, 4=0.14%, 10=1.62%, 20=2.27%, 50=4.63% 00:24:15.264 lat (msec) : 100=10.68%, 250=74.15%, 500=5.81%, 750=0.67% 00:24:15.264 cpu : usr=1.43%, sys=1.52%, ctx=2049, majf=0, minf=1 00:24:15.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:24:15.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.264 issued rwts: total=0,4317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.264 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.264 job1: (groupid=0, jobs=1): err= 0: pid=1960851: Mon Jul 15 09:57:30 2024 00:24:15.264 write: IOPS=666, BW=167MiB/s (175MB/s)(1698MiB/10188msec); 0 zone resets 00:24:15.264 slat (usec): min=17, max=100225, avg=962.70, stdev=3509.73 00:24:15.264 clat (usec): min=1422, max=593680, avg=94975.64, stdev=80882.01 00:24:15.264 lat (usec): min=1446, max=601727, avg=95938.33, stdev=81723.20 00:24:15.264 clat percentiles (msec): 00:24:15.264 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 20], 20.00th=[ 41], 00:24:15.264 | 30.00th=[ 48], 40.00th=[ 59], 50.00th=[ 74], 60.00th=[ 88], 00:24:15.264 | 70.00th=[ 105], 80.00th=[ 140], 90.00th=[ 209], 95.00th=[ 249], 00:24:15.264 | 99.00th=[ 422], 99.50th=[ 493], 99.90th=[ 575], 99.95th=[ 575], 00:24:15.264 | 99.99th=[ 592] 00:24:15.264 bw ( KiB/s): min=38912, max=317952, per=12.38%, avg=172222.35, stdev=73412.91, samples=20 00:24:15.264 iops : min= 152, max= 1242, avg=672.70, stdev=286.81, samples=20 00:24:15.264 lat (msec) : 2=0.10%, 4=0.96%, 10=4.30%, 20=4.99%, 50=24.45% 00:24:15.264 lat (msec) : 100=34.09%, 250=26.24%, 500=4.37%, 750=0.49% 00:24:15.264 cpu : usr=2.10%, sys=2.39%, ctx=3959, majf=0, minf=1 00:24:15.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:15.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.264 issued rwts: total=0,6790,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.264 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.264 job2: (groupid=0, jobs=1): err= 0: pid=1960884: Mon Jul 15 09:57:30 2024 00:24:15.264 write: IOPS=422, BW=106MiB/s (111MB/s)(1077MiB/10203msec); 0 zone resets 00:24:15.264 slat (usec): min=15, max=122537, avg=1623.27, stdev=5155.49 00:24:15.264 clat (usec): min=1780, max=690244, avg=149911.11, stdev=96807.46 00:24:15.264 lat (usec): min=1830, max=690318, avg=151534.38, stdev=98178.12 00:24:15.264 clat percentiles (msec): 00:24:15.264 | 1.00th=[ 6], 5.00th=[ 24], 10.00th=[ 38], 20.00th=[ 65], 
00:24:15.264 | 30.00th=[ 93], 40.00th=[ 121], 50.00th=[ 142], 60.00th=[ 169], 00:24:15.264 | 70.00th=[ 197], 80.00th=[ 218], 90.00th=[ 247], 95.00th=[ 275], 00:24:15.264 | 99.00th=[ 592], 99.50th=[ 659], 99.90th=[ 684], 99.95th=[ 693], 00:24:15.264 | 99.99th=[ 693] 00:24:15.264 bw ( KiB/s): min=36864, max=186368, per=7.81%, avg=108638.35, stdev=42572.46, samples=20 00:24:15.264 iops : min= 144, max= 728, avg=424.35, stdev=166.27, samples=20 00:24:15.264 lat (msec) : 2=0.05%, 4=0.49%, 10=1.65%, 20=2.00%, 50=9.89% 00:24:15.264 lat (msec) : 100=18.18%, 250=58.70%, 500=7.78%, 750=1.28% 00:24:15.264 cpu : usr=1.25%, sys=1.41%, ctx=2492, majf=0, minf=1 00:24:15.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:24:15.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.264 issued rwts: total=0,4307,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.264 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.264 job3: (groupid=0, jobs=1): err= 0: pid=1960917: Mon Jul 15 09:57:30 2024 00:24:15.264 write: IOPS=460, BW=115MiB/s (121MB/s)(1175MiB/10197msec); 0 zone resets 00:24:15.264 slat (usec): min=17, max=38089, avg=1789.97, stdev=4129.18 00:24:15.264 clat (usec): min=980, max=404288, avg=136960.69, stdev=72000.47 00:24:15.264 lat (usec): min=1016, max=404316, avg=138750.66, stdev=72934.46 00:24:15.264 clat percentiles (msec): 00:24:15.264 | 1.00th=[ 7], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 55], 00:24:15.264 | 30.00th=[ 79], 40.00th=[ 118], 50.00th=[ 150], 60.00th=[ 171], 00:24:15.264 | 70.00th=[ 188], 80.00th=[ 203], 90.00th=[ 224], 95.00th=[ 241], 00:24:15.264 | 99.00th=[ 264], 99.50th=[ 321], 99.90th=[ 393], 99.95th=[ 393], 00:24:15.264 | 99.99th=[ 405] 00:24:15.264 bw ( KiB/s): min=69632, max=263680, per=8.53%, avg=118664.60, stdev=61480.07, samples=20 00:24:15.264 iops : min= 272, max= 1030, avg=463.50, stdev=240.18, samples=20 00:24:15.264 lat (usec) : 1000=0.02% 00:24:15.264 lat (msec) : 2=0.15%, 4=0.43%, 10=1.62%, 20=0.79%, 50=15.73% 00:24:15.264 lat (msec) : 100=17.45%, 250=60.37%, 500=3.45% 00:24:15.264 cpu : usr=1.39%, sys=1.53%, ctx=1931, majf=0, minf=1 00:24:15.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:15.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.264 issued rwts: total=0,4699,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.264 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.264 job4: (groupid=0, jobs=1): err= 0: pid=1960924: Mon Jul 15 09:57:30 2024 00:24:15.264 write: IOPS=503, BW=126MiB/s (132MB/s)(1273MiB/10110msec); 0 zone resets 00:24:15.264 slat (usec): min=17, max=157196, avg=1386.22, stdev=5508.17 00:24:15.264 clat (usec): min=1016, max=568822, avg=125622.80, stdev=92591.04 00:24:15.264 lat (usec): min=1065, max=568862, avg=127009.02, stdev=93569.01 00:24:15.264 clat percentiles (msec): 00:24:15.264 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 17], 20.00th=[ 37], 00:24:15.264 | 30.00th=[ 59], 40.00th=[ 86], 50.00th=[ 120], 60.00th=[ 146], 00:24:15.264 | 70.00th=[ 178], 80.00th=[ 205], 90.00th=[ 236], 95.00th=[ 264], 00:24:15.264 | 99.00th=[ 456], 99.50th=[ 506], 99.90th=[ 567], 99.95th=[ 567], 00:24:15.264 | 99.99th=[ 567] 00:24:15.264 bw ( KiB/s): min=54272, max=244736, per=9.25%, avg=128708.15, stdev=58788.71, samples=20 00:24:15.264 iops : min= 
212, max= 956, avg=502.75, stdev=229.63, samples=20 00:24:15.264 lat (msec) : 2=0.57%, 4=1.73%, 10=4.56%, 20=5.66%, 50=13.65% 00:24:15.264 lat (msec) : 100=20.33%, 250=46.72%, 500=6.21%, 750=0.57% 00:24:15.264 cpu : usr=1.55%, sys=1.74%, ctx=3021, majf=0, minf=1 00:24:15.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:15.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.264 issued rwts: total=0,5090,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.264 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.264 job5: (groupid=0, jobs=1): err= 0: pid=1960971: Mon Jul 15 09:57:30 2024 00:24:15.264 write: IOPS=591, BW=148MiB/s (155MB/s)(1495MiB/10114msec); 0 zone resets 00:24:15.264 slat (usec): min=16, max=60304, avg=796.06, stdev=2826.74 00:24:15.264 clat (usec): min=1311, max=628097, avg=107396.19, stdev=77535.01 00:24:15.264 lat (usec): min=1352, max=628186, avg=108192.25, stdev=78066.10 00:24:15.264 clat percentiles (msec): 00:24:15.264 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 27], 20.00th=[ 45], 00:24:15.264 | 30.00th=[ 64], 40.00th=[ 79], 50.00th=[ 94], 60.00th=[ 114], 00:24:15.264 | 70.00th=[ 132], 80.00th=[ 150], 90.00th=[ 201], 95.00th=[ 243], 00:24:15.264 | 99.00th=[ 393], 99.50th=[ 523], 99.90th=[ 609], 99.95th=[ 617], 00:24:15.264 | 99.99th=[ 625] 00:24:15.264 bw ( KiB/s): min=61952, max=283648, per=10.89%, avg=151475.20, stdev=57126.39, samples=20 00:24:15.264 iops : min= 242, max= 1108, avg=591.70, stdev=223.15, samples=20 00:24:15.264 lat (msec) : 2=0.17%, 4=0.40%, 10=2.22%, 20=2.96%, 50=17.66% 00:24:15.264 lat (msec) : 100=29.87%, 250=42.37%, 500=3.81%, 750=0.54% 00:24:15.264 cpu : usr=1.59%, sys=2.04%, ctx=4272, majf=0, minf=1 00:24:15.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:15.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.264 issued rwts: total=0,5980,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.264 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.264 job6: (groupid=0, jobs=1): err= 0: pid=1960982: Mon Jul 15 09:57:30 2024 00:24:15.264 write: IOPS=600, BW=150MiB/s (157MB/s)(1511MiB/10067msec); 0 zone resets 00:24:15.264 slat (usec): min=23, max=171200, avg=936.19, stdev=3912.78 00:24:15.264 clat (usec): min=1894, max=381562, avg=105585.47, stdev=67429.29 00:24:15.264 lat (usec): min=1937, max=381672, avg=106521.66, stdev=68039.16 00:24:15.264 clat percentiles (msec): 00:24:15.264 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 35], 20.00th=[ 44], 00:24:15.264 | 30.00th=[ 54], 40.00th=[ 75], 50.00th=[ 90], 60.00th=[ 114], 00:24:15.264 | 70.00th=[ 142], 80.00th=[ 167], 90.00th=[ 205], 95.00th=[ 226], 00:24:15.264 | 99.00th=[ 288], 99.50th=[ 305], 99.90th=[ 372], 99.95th=[ 380], 00:24:15.264 | 99.99th=[ 380] 00:24:15.264 bw ( KiB/s): min=70144, max=331264, per=11.01%, avg=153095.40, stdev=65051.31, samples=20 00:24:15.264 iops : min= 274, max= 1294, avg=598.00, stdev=254.09, samples=20 00:24:15.264 lat (msec) : 2=0.03%, 4=0.03%, 10=0.94%, 20=3.31%, 50=24.87% 00:24:15.264 lat (msec) : 100=25.31%, 250=42.80%, 500=2.70% 00:24:15.265 cpu : usr=2.06%, sys=2.16%, ctx=3751, majf=0, minf=1 00:24:15.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:15.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:24:15.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.265 issued rwts: total=0,6044,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.265 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.265 job7: (groupid=0, jobs=1): err= 0: pid=1960985: Mon Jul 15 09:57:30 2024 00:24:15.265 write: IOPS=382, BW=95.5MiB/s (100MB/s)(974MiB/10189msec); 0 zone resets 00:24:15.265 slat (usec): min=25, max=135189, avg=2403.70, stdev=5879.69 00:24:15.265 clat (msec): min=10, max=680, avg=164.95, stdev=84.65 00:24:15.265 lat (msec): min=11, max=680, avg=167.36, stdev=85.69 00:24:15.265 clat percentiles (msec): 00:24:15.265 | 1.00th=[ 33], 5.00th=[ 75], 10.00th=[ 81], 20.00th=[ 109], 00:24:15.265 | 30.00th=[ 126], 40.00th=[ 136], 50.00th=[ 153], 60.00th=[ 167], 00:24:15.265 | 70.00th=[ 182], 80.00th=[ 215], 90.00th=[ 249], 95.00th=[ 288], 00:24:15.265 | 99.00th=[ 542], 99.50th=[ 625], 99.90th=[ 676], 99.95th=[ 684], 00:24:15.265 | 99.99th=[ 684] 00:24:15.265 bw ( KiB/s): min=30720, max=178176, per=7.05%, avg=98059.15, stdev=38413.32, samples=20 00:24:15.265 iops : min= 120, max= 696, avg=383.00, stdev=150.00, samples=20 00:24:15.265 lat (msec) : 20=0.33%, 50=1.93%, 100=16.18%, 250=71.78%, 500=8.29% 00:24:15.265 lat (msec) : 750=1.49% 00:24:15.265 cpu : usr=1.21%, sys=1.29%, ctx=1205, majf=0, minf=1 00:24:15.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:15.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.265 issued rwts: total=0,3894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.265 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.265 job8: (groupid=0, jobs=1): err= 0: pid=1960986: Mon Jul 15 09:57:30 2024 00:24:15.265 write: IOPS=488, BW=122MiB/s (128MB/s)(1246MiB/10195msec); 0 zone resets 00:24:15.265 slat (usec): min=17, max=86128, avg=1310.53, stdev=4358.79 00:24:15.265 clat (usec): min=1176, max=486861, avg=129444.88, stdev=87747.07 00:24:15.265 lat (usec): min=1215, max=486890, avg=130755.41, stdev=88675.02 00:24:15.265 clat percentiles (msec): 00:24:15.265 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 14], 20.00th=[ 36], 00:24:15.265 | 30.00th=[ 59], 40.00th=[ 95], 50.00th=[ 144], 60.00th=[ 163], 00:24:15.265 | 70.00th=[ 184], 80.00th=[ 205], 90.00th=[ 236], 95.00th=[ 264], 00:24:15.265 | 99.00th=[ 355], 99.50th=[ 405], 99.90th=[ 468], 99.95th=[ 477], 00:24:15.265 | 99.99th=[ 489] 00:24:15.265 bw ( KiB/s): min=67072, max=208384, per=9.06%, avg=125986.60, stdev=45902.68, samples=20 00:24:15.265 iops : min= 262, max= 814, avg=492.10, stdev=179.34, samples=20 00:24:15.265 lat (msec) : 2=0.62%, 4=2.27%, 10=5.60%, 20=4.43%, 50=14.72% 00:24:15.265 lat (msec) : 100=13.00%, 250=51.61%, 500=7.74% 00:24:15.265 cpu : usr=1.35%, sys=1.82%, ctx=3185, majf=0, minf=1 00:24:15.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:15.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.265 issued rwts: total=0,4985,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.265 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.265 job9: (groupid=0, jobs=1): err= 0: pid=1960987: Mon Jul 15 09:57:30 2024 00:24:15.265 write: IOPS=464, BW=116MiB/s (122MB/s)(1174MiB/10113msec); 0 zone resets 00:24:15.265 slat (usec): min=17, max=154733, avg=1818.98, 
stdev=5280.65 00:24:15.265 clat (usec): min=1006, max=323057, avg=135894.23, stdev=75470.84 00:24:15.265 lat (usec): min=1043, max=323096, avg=137713.21, stdev=76385.03 00:24:15.265 clat percentiles (msec): 00:24:15.265 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 31], 20.00th=[ 75], 00:24:15.265 | 30.00th=[ 79], 40.00th=[ 100], 50.00th=[ 140], 60.00th=[ 163], 00:24:15.265 | 70.00th=[ 182], 80.00th=[ 205], 90.00th=[ 239], 95.00th=[ 257], 00:24:15.265 | 99.00th=[ 300], 99.50th=[ 309], 99.90th=[ 321], 99.95th=[ 321], 00:24:15.265 | 99.99th=[ 326] 00:24:15.265 bw ( KiB/s): min=57344, max=206848, per=8.53%, avg=118619.10, stdev=49605.73, samples=20 00:24:15.265 iops : min= 224, max= 808, avg=463.35, stdev=193.77, samples=20 00:24:15.265 lat (msec) : 2=0.32%, 4=1.15%, 10=3.15%, 20=3.85%, 50=3.79% 00:24:15.265 lat (msec) : 100=28.05%, 250=53.26%, 500=6.43% 00:24:15.265 cpu : usr=1.27%, sys=1.73%, ctx=1989, majf=0, minf=1 00:24:15.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:15.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.265 issued rwts: total=0,4696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.265 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.265 job10: (groupid=0, jobs=1): err= 0: pid=1960988: Mon Jul 15 09:57:30 2024 00:24:15.265 write: IOPS=455, BW=114MiB/s (119MB/s)(1160MiB/10191msec); 0 zone resets 00:24:15.265 slat (usec): min=19, max=175011, avg=1598.22, stdev=5021.99 00:24:15.265 clat (msec): min=2, max=608, avg=138.83, stdev=86.02 00:24:15.265 lat (msec): min=2, max=608, avg=140.42, stdev=87.00 00:24:15.265 clat percentiles (msec): 00:24:15.265 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 45], 20.00th=[ 75], 00:24:15.265 | 30.00th=[ 85], 40.00th=[ 114], 50.00th=[ 129], 60.00th=[ 148], 00:24:15.265 | 70.00th=[ 174], 80.00th=[ 192], 90.00th=[ 232], 95.00th=[ 271], 00:24:15.265 | 99.00th=[ 502], 99.50th=[ 535], 99.90th=[ 575], 99.95th=[ 609], 00:24:15.265 | 99.99th=[ 609] 00:24:15.265 bw ( KiB/s): min=73216, max=195584, per=8.42%, avg=117184.25, stdev=37606.82, samples=20 00:24:15.265 iops : min= 286, max= 764, avg=457.75, stdev=146.90, samples=20 00:24:15.265 lat (msec) : 4=0.06%, 10=2.74%, 20=2.09%, 50=6.12%, 100=24.59% 00:24:15.265 lat (msec) : 250=56.62%, 500=6.77%, 750=1.01% 00:24:15.265 cpu : usr=1.62%, sys=1.68%, ctx=2436, majf=0, minf=1 00:24:15.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:24:15.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.265 issued rwts: total=0,4640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.265 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.265 00:24:15.265 Run status group 0 (all jobs): 00:24:15.265 WRITE: bw=1358MiB/s (1424MB/s), 95.5MiB/s-167MiB/s (100MB/s-175MB/s), io=13.5GiB (14.5GB), run=10067-10203msec 00:24:15.265 00:24:15.265 Disk stats (read/write): 00:24:15.265 nvme0n1: ios=49/8362, merge=0/0, ticks=1450/1213357, in_queue=1214807, util=99.65% 00:24:15.265 nvme10n1: ios=51/13555, merge=0/0, ticks=579/1245390, in_queue=1245969, util=99.87% 00:24:15.265 nvme1n1: ios=49/8571, merge=0/0, ticks=129/1241627, in_queue=1241756, util=98.43% 00:24:15.265 nvme2n1: ios=48/9365, merge=0/0, ticks=874/1236494, in_queue=1237368, util=99.85% 00:24:15.265 nvme3n1: ios=48/9921, merge=0/0, ticks=2923/1182095, 
in_queue=1185018, util=100.00% 00:24:15.265 nvme4n1: ios=45/11736, merge=0/0, ticks=1790/1227567, in_queue=1229357, util=99.84% 00:24:15.265 nvme5n1: ios=45/11814, merge=0/0, ticks=2609/1215203, in_queue=1217812, util=99.87% 00:24:15.265 nvme6n1: ios=48/7759, merge=0/0, ticks=2239/1228484, in_queue=1230723, util=99.82% 00:24:15.265 nvme7n1: ios=39/9940, merge=0/0, ticks=1098/1240611, in_queue=1241709, util=99.83% 00:24:15.265 nvme8n1: ios=43/9168, merge=0/0, ticks=2026/1191482, in_queue=1193508, util=100.00% 00:24:15.265 nvme9n1: ios=24/9253, merge=0/0, ticks=677/1239550, in_queue=1240227, util=99.94% 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:15.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:15.265 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode2 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:15.265 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:15.265 09:57:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:24:15.265 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:15.265 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:15.266 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.266 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:15.266 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.266 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:15.266 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:15.524 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:15.524 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:15.524 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:15.524 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:15.524 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:24:15.524 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:15.524 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:24:15.524 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:15.524 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:15.524 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.524 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:15.524 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.524 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:15.524 
09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:15.524 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:15.524 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:15.524 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:15.524 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:15.524 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:15.781 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:15.781 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:15.781 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:16.040 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.040 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:16.300 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1231 -- # return 0 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:16.300 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.300 09:57:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:16.558 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:16.558 rmmod nvme_tcp 00:24:16.558 rmmod nvme_fabrics 00:24:16.558 rmmod nvme_keyring 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1954924 ']' 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1954924 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 1954924 ']' 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 1954924 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1954924 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1954924' 00:24:16.558 killing process with pid 1954924 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 1954924 00:24:16.558 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 1954924 00:24:17.126 09:57:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:17.126 09:57:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:17.126 09:57:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:17.126 09:57:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:17.126 09:57:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:17.126 09:57:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.126 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.126 09:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.027 09:57:35 
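(Editor's note: the interleaved xtrace above is one pass of multiconnection.sh's teardown loop, repeated here for cnode5 through cnode11. A condensed sketch of the pattern, with rpc.py standing in for the harness's rpc_cmd wrapper:)

    # Disconnect each host-side controller, wait for its block device to
    # vanish, then remove the matching subsystem on the target.
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        waitforserial_disconnect "SPDK${i}"   # polls lsblk -o NAME,SERIAL
        rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done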
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:19.027 00:24:19.027 real 0m59.743s 00:24:19.027 user 3m19.918s 00:24:19.027 sys 0m24.500s 00:24:19.028 09:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:19.028 09:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.028 ************************************ 00:24:19.028 END TEST nvmf_multiconnection 00:24:19.028 ************************************ 00:24:19.028 09:57:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:19.028 09:57:35 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:19.028 09:57:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:19.028 09:57:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:19.028 09:57:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:19.284 ************************************ 00:24:19.284 START TEST nvmf_initiator_timeout 00:24:19.284 ************************************ 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:19.284 * Looking for test storage... 00:24:19.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:19.284 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:24:19.285 09:57:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.183 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.183 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:24:21.183 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:24:21.184 09:57:37 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:21.184 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:21.184 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp 
== rdma ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:21.184 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:21.184 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # 
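(Editor's note: the branch-heavy trace above is common.sh's gather_supported_nvmf_pci_devs matching NIC PCI IDs against an allowlist; both ports on this host are Intel E810, 0x8086:0x159b, bound to the ice driver. A simplified sketch of that discovery, assuming the standard sysfs layout:)

    # Walk PCI devices, keep known NICs, and map each one to its netdev name.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
        if [[ $vendor == 0x8086 && $device == 0x159b ]]; then   # E810 (ice)
            echo "Found ${pci##*/}: $(ls "$pci/net" 2>/dev/null)"
        fi
    done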
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:21.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:24:21.184 00:24:21.184 --- 10.0.0.2 ping statistics --- 00:24:21.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.184 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:21.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:24:21.184 00:24:21.184 --- 10.0.0.1 ping statistics --- 00:24:21.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.184 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:21.184 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:21.443 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:21.443 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:21.443 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:21.443 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.443 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1964292 00:24:21.443 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:21.443 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1964292 00:24:21.443 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 1964292 ']' 00:24:21.443 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.443 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:21.443 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.443 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:21.443 09:57:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.443 [2024-07-15 09:57:38.039994] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:24:21.443 [2024-07-15 09:57:38.040081] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.443 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.443 [2024-07-15 09:57:38.081824] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
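(Editor's note: nvmf_tcp_init above splits the two E810 ports across a network namespace, so target and initiator traffic crosses a real link on a single machine. The essential commands, taken from the trace:)

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target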
00:24:21.443 [2024-07-15 09:57:38.114397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:21.443 [2024-07-15 09:57:38.211690] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.443 [2024-07-15 09:57:38.211748] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.443 [2024-07-15 09:57:38.211764] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.443 [2024-07-15 09:57:38.211779] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.443 [2024-07-15 09:57:38.211791] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:21.443 [2024-07-15 09:57:38.211848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.443 [2024-07-15 09:57:38.211898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.443 [2024-07-15 09:57:38.211999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:21.443 [2024-07-15 09:57:38.212002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.703 Malloc0 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.703 Delay0 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.703 [2024-07-15 09:57:38.407899] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:21.703 [2024-07-15 09:57:38.436205] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.703 09:57:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:22.640 09:57:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:22.640 09:57:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:24:22.640 09:57:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:22.640 09:57:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:22.640 09:57:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:24:24.539 09:57:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:24.540 09:57:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:24.540 09:57:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:24:24.540 09:57:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:24.540 09:57:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:24.540 09:57:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:24:24.540 09:57:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1964597 00:24:24.540 09:57:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:24:24.540 09:57:41 nvmf_tcp.nvmf_initiator_timeout 
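(Editor's note: waitforserial above blocks until the freshly connected namespace shows up as a block device. Reconstructed from the traced commands, it is roughly:)

    # Poll lsblk until a device with the expected serial appears
    # (the harness also compares against an expected device count).
    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            sleep 2
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
        done
        return 1
    }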
-- target/initiator_timeout.sh@37 -- # sleep 3 00:24:24.540 [global] 00:24:24.540 thread=1 00:24:24.540 invalidate=1 00:24:24.540 rw=write 00:24:24.540 time_based=1 00:24:24.540 runtime=60 00:24:24.540 ioengine=libaio 00:24:24.540 direct=1 00:24:24.540 bs=4096 00:24:24.540 iodepth=1 00:24:24.540 norandommap=0 00:24:24.540 numjobs=1 00:24:24.540 00:24:24.540 verify_dump=1 00:24:24.540 verify_backlog=512 00:24:24.540 verify_state_save=0 00:24:24.540 do_verify=1 00:24:24.540 verify=crc32c-intel 00:24:24.540 [job0] 00:24:24.540 filename=/dev/nvme0n1 00:24:24.540 Could not set queue depth (nvme0n1) 00:24:24.797 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:24.797 fio-3.35 00:24:24.797 Starting 1 thread 00:24:27.373 09:57:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:24:27.373 09:57:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.373 09:57:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:27.632 true 00:24:27.632 09:57:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.632 09:57:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:24:27.632 09:57:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.632 09:57:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:27.632 true 00:24:27.632 09:57:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.632 09:57:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:27.632 09:57:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.632 09:57:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:27.632 true 00:24:27.632 09:57:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.632 09:57:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:24:27.632 09:57:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.632 09:57:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:27.632 true 00:24:27.632 09:57:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.632 09:57:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:24:30.922 09:57:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:24:30.922 09:57:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.922 09:57:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.922 true 00:24:30.922 09:57:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.922 09:57:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:24:30.922 09:57:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.922 09:57:47 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.922 true 00:24:30.922 09:57:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.922 09:57:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:24:30.922 09:57:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.922 09:57:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.922 true 00:24:30.922 09:57:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.922 09:57:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:24:30.922 09:57:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.922 09:57:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.922 true 00:24:30.922 09:57:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.923 09:57:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:24:30.923 09:57:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1964597 00:25:27.187 00:25:27.187 job0: (groupid=0, jobs=1): err= 0: pid=1964737: Mon Jul 15 09:58:41 2024 00:25:27.187 read: IOPS=24, BW=98.5KiB/s (101kB/s)(5912KiB/60032msec) 00:25:27.187 slat (usec): min=4, max=12874, avg=22.08, stdev=334.64 00:25:27.187 clat (usec): min=282, max=40945k, avg=40342.71, stdev=1064873.71 00:25:27.187 lat (usec): min=287, max=40945k, avg=40364.79, stdev=1064873.97 00:25:27.187 clat percentiles (usec): 00:25:27.187 | 1.00th=[ 297], 5.00th=[ 314], 10.00th=[ 330], 00:25:27.187 | 20.00th=[ 343], 30.00th=[ 359], 40.00th=[ 375], 00:25:27.187 | 50.00th=[ 379], 60.00th=[ 383], 70.00th=[ 619], 00:25:27.187 | 80.00th=[ 41157], 90.00th=[ 42206], 95.00th=[ 42206], 00:25:27.187 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:25:27.187 | 99.95th=[17112761], 99.99th=[17112761] 00:25:27.187 write: IOPS=25, BW=102KiB/s (105kB/s)(6144KiB/60032msec); 0 zone resets 00:25:27.187 slat (nsec): min=5469, max=36901, avg=8258.69, stdev=3705.53 00:25:27.187 clat (usec): min=195, max=408, avg=227.57, stdev=21.35 00:25:27.187 lat (usec): min=201, max=440, avg=235.83, stdev=23.21 00:25:27.187 clat percentiles (usec): 00:25:27.187 | 1.00th=[ 202], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:25:27.187 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 225], 00:25:27.187 | 70.00th=[ 231], 80.00th=[ 245], 90.00th=[ 262], 95.00th=[ 269], 00:25:27.187 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 379], 99.95th=[ 408], 00:25:27.187 | 99.99th=[ 408] 00:25:27.187 bw ( KiB/s): min= 4096, max= 8192, per=100.00%, avg=6144.00, stdev=2896.31, samples=2 00:25:27.187 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:25:27.187 lat (usec) : 250=42.77%, 500=42.40%, 750=0.13% 00:25:27.187 lat (msec) : 2=0.03%, 50=14.63%, >=2000=0.03% 00:25:27.187 cpu : usr=0.04%, sys=0.04%, ctx=3015, majf=0, minf=2 00:25:27.187 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:27.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.187 issued rwts: total=1478,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 
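(Editor's note: the RPCs traced before and after the fio run are the crux of the initiator-timeout test: Malloc0 is wrapped in a delay bdev, the latencies are raised to ~31 s mid-I/O so host commands stall past their timeout, then restored; the ~40 s clat tail in the fio summary above likely reflects that injected stall. Condensed, via rpc.py, with the values exactly as logged:)

    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    rpc.py bdev_delay_update_latency Delay0 avg_read  31000000   # raise (usec)
    rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
    rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
    rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
    # ... fio keeps running against /dev/nvme0n1 ...
    rpc.py bdev_delay_update_latency Delay0 avg_read  30         # restore
    rpc.py bdev_delay_update_latency Delay0 avg_write 30
    rpc.py bdev_delay_update_latency Delay0 p99_read  30
    rpc.py bdev_delay_update_latency Delay0 p99_write 30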
00:25:27.187 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:27.187 00:25:27.187 Run status group 0 (all jobs): 00:25:27.187 READ: bw=98.5KiB/s (101kB/s), 98.5KiB/s-98.5KiB/s (101kB/s-101kB/s), io=5912KiB (6054kB), run=60032-60032msec 00:25:27.187 WRITE: bw=102KiB/s (105kB/s), 102KiB/s-102KiB/s (105kB/s-105kB/s), io=6144KiB (6291kB), run=60032-60032msec 00:25:27.187 00:25:27.187 Disk stats (read/write): 00:25:27.187 nvme0n1: ios=1573/1536, merge=0/0, ticks=18812/335, in_queue=19147, util=99.96% 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:27.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:27.187 nvmf hotplug test: fio successful as expected 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:27.187 rmmod nvme_tcp 00:25:27.187 rmmod nvme_fabrics 00:25:27.187 rmmod nvme_keyring 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:25:27.187 
09:58:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1964292 ']' 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1964292 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 1964292 ']' 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 1964292 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1964292 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1964292' 00:25:27.187 killing process with pid 1964292 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 1964292 00:25:27.187 09:58:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 1964292 00:25:27.187 09:58:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:27.187 09:58:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:27.187 09:58:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:27.187 09:58:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:27.187 09:58:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:27.187 09:58:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.187 09:58:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.187 09:58:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.447 09:58:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:27.447 00:25:27.447 real 1m8.223s 00:25:27.447 user 4m11.153s 00:25:27.447 sys 0m6.285s 00:25:27.447 09:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:27.447 09:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:27.447 ************************************ 00:25:27.447 END TEST nvmf_initiator_timeout 00:25:27.447 ************************************ 00:25:27.447 09:58:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:27.447 09:58:44 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:25:27.447 09:58:44 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:25:27.447 09:58:44 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:25:27.447 09:58:44 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:25:27.447 09:58:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:25:29.350 09:58:45 nvmf_tcp -- 
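(Editor's note: killprocess above is the harness's guarded shutdown of the nvmf_tgt reactor, pid 1964292. Reconstructed from the traced checks:)

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                        # still alive?
        # refuse to signal sudo itself (comm was reactor_0 here)
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # works because the target was launched by this shell
    }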
nvmf/common.sh@291 -- # local -a pci_devs 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:29.350 09:58:45 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:29.351 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:29.351 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:29.351 
09:58:45 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:29.351 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:29.351 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:25:29.351 09:58:45 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:29.351 09:58:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:29.351 09:58:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:29.351 09:58:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:29.351 ************************************ 00:25:29.351 START TEST nvmf_perf_adq 00:25:29.351 ************************************ 00:25:29.351 09:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:29.351 * Looking for test storage... 
00:25:29.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:25:29.351 09:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:31.887 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:31.887 Found 0000:0a:00.1 (0x8086 - 0x159b) 
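[The repeated device scans in this log all come from gather_supported_nvmf_pci_devs in nvmf/common.sh: it matches PCI functions against known Intel E810/X722 and Mellanox vendor:device IDs (0x8086:0x159b is the E810 "ice" part found here), then resolves each match to its kernel net device through sysfs. A minimal standalone sketch of the same lookup, under the assumption that reading /sys/bus/pci/devices/*/vendor and .../device is sufficient — the loop body below is ours, not the harness code:

  intel=0x8086
  for dev in /sys/bus/pci/devices/*; do
      # keep only E810 functions (vendor 0x8086, device 0x159b)
      [[ $(<"$dev/vendor") == "$intel" && $(<"$dev/device") == 0x159b ]] || continue
      echo "Found ${dev##*/} ($(<"$dev/vendor") - $(<"$dev/device"))"
      for net in "$dev"/net/*; do            # net devices bound to this PCI function
          [[ -e $net ]] && echo "  net device under ${dev##*/}: ${net##*/}"
      done
  done

The harness additionally skips unbound devices and, for RDMA transports, filters by driver; the trace above shows the TCP path, which only needs the device to be up.]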
00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:31.887 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:31.887 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:25:31.887 09:58:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:25:34.419 09:58:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:25:39.737 09:58:55 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:25:39.737 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:39.737 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.737 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:39.737 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:39.737 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:39.737 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.737 09:58:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:39.737 09:58:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.737 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:39.737 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:39.738 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:39.738 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:39.738 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:39.738 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.738 09:58:55 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:39.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:25:39.738 00:25:39.738 --- 10.0.0.2 ping statistics --- 00:25:39.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.738 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:39.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:25:39.738 00:25:39.738 --- 10.0.0.1 ping statistics --- 00:25:39.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.738 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1976279 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1976279 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1976279 ']' 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:39.738 09:58:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.738 [2024-07-15 09:58:55.855959] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
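[At this point nvmf_tcp_init has turned the two E810 ports into a point-to-point test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the two pings above confirm reachability in both directions. Condensed from the trace, the setup is:

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Running the target in its own namespace is what lets a single host exercise a real NIC-to-NIC NVMe/TCP path; NVMF_TARGET_NS_CMD is then prepended to the nvmf_tgt command line so the application binds inside the namespace.]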
00:25:39.739 [2024-07-15 09:58:55.856039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.739 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.739 [2024-07-15 09:58:55.894096] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:39.739 [2024-07-15 09:58:55.925052] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:39.739 [2024-07-15 09:58:56.015998] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:39.739 [2024-07-15 09:58:56.016062] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.739 [2024-07-15 09:58:56.016079] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:39.739 [2024-07-15 09:58:56.016093] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:39.739 [2024-07-15 09:58:56.016106] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:39.739 [2024-07-15 09:58:56.016187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.739 [2024-07-15 09:58:56.016243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:39.739 [2024-07-15 09:58:56.016358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:39.739 [2024-07-15 09:58:56.016361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 
-- # rpc_cmd framework_start_init 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.739 [2024-07-15 09:58:56.247820] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.739 Malloc1 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.739 [2024-07-15 09:58:56.301015] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1976330 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:25:39.739 09:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:39.739 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.673 09:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:25:41.673 09:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.673 09:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:41.673 
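[adq_configure_nvmf_target (perf_adq.sh@42-49) then provisions the target entirely over JSON-RPC: select the posix sock implementation and set its placement-id mode, finish the deferred framework init (the app was started with --wait-for-rpc), create the TCP transport with an explicit --sock-priority, and expose a 64 MiB malloc bdev as namespace 1 of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. rpc_cmd is a thin wrapper around scripts/rpc.py, so the same sequence as direct calls would look roughly like this — the $rpc shorthand is ours, the flags are verbatim from the trace:

  rpc="spdk/scripts/rpc.py"        # shorthand; the harness resolves the full workspace path
  $rpc sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

This first pass uses placement-id 0 and sock priority 0 (ADQ off, the baseline run). The nvmf_get_stats output that follows is checked with jq to confirm that each of the four poll groups carries exactly one active I/O qpair, i.e. that the 64-queue-depth randread load from spdk_nvme_perf is spread across all reactors.]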
09:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.673 09:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:25:41.673 "tick_rate": 2700000000, 00:25:41.673 "poll_groups": [ 00:25:41.673 { 00:25:41.673 "name": "nvmf_tgt_poll_group_000", 00:25:41.673 "admin_qpairs": 1, 00:25:41.673 "io_qpairs": 1, 00:25:41.673 "current_admin_qpairs": 1, 00:25:41.673 "current_io_qpairs": 1, 00:25:41.673 "pending_bdev_io": 0, 00:25:41.673 "completed_nvme_io": 21121, 00:25:41.673 "transports": [ 00:25:41.673 { 00:25:41.673 "trtype": "TCP" 00:25:41.673 } 00:25:41.673 ] 00:25:41.673 }, 00:25:41.673 { 00:25:41.673 "name": "nvmf_tgt_poll_group_001", 00:25:41.673 "admin_qpairs": 0, 00:25:41.673 "io_qpairs": 1, 00:25:41.673 "current_admin_qpairs": 0, 00:25:41.673 "current_io_qpairs": 1, 00:25:41.673 "pending_bdev_io": 0, 00:25:41.673 "completed_nvme_io": 21546, 00:25:41.673 "transports": [ 00:25:41.673 { 00:25:41.673 "trtype": "TCP" 00:25:41.673 } 00:25:41.673 ] 00:25:41.673 }, 00:25:41.673 { 00:25:41.673 "name": "nvmf_tgt_poll_group_002", 00:25:41.673 "admin_qpairs": 0, 00:25:41.673 "io_qpairs": 1, 00:25:41.673 "current_admin_qpairs": 0, 00:25:41.673 "current_io_qpairs": 1, 00:25:41.673 "pending_bdev_io": 0, 00:25:41.673 "completed_nvme_io": 18343, 00:25:41.673 "transports": [ 00:25:41.673 { 00:25:41.673 "trtype": "TCP" 00:25:41.673 } 00:25:41.673 ] 00:25:41.673 }, 00:25:41.673 { 00:25:41.673 "name": "nvmf_tgt_poll_group_003", 00:25:41.673 "admin_qpairs": 0, 00:25:41.673 "io_qpairs": 1, 00:25:41.673 "current_admin_qpairs": 0, 00:25:41.673 "current_io_qpairs": 1, 00:25:41.673 "pending_bdev_io": 0, 00:25:41.673 "completed_nvme_io": 21176, 00:25:41.673 "transports": [ 00:25:41.673 { 00:25:41.673 "trtype": "TCP" 00:25:41.673 } 00:25:41.673 ] 00:25:41.673 } 00:25:41.673 ] 00:25:41.673 }' 00:25:41.673 09:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:41.673 09:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:25:41.673 09:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:25:41.673 09:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:25:41.673 09:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1976330 00:25:49.785 Initializing NVMe Controllers 00:25:49.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:49.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:49.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:49.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:49.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:49.785 Initialization complete. Launching workers. 
00:25:49.785 ======================================================== 00:25:49.785 Latency(us) 00:25:49.785 Device Information : IOPS MiB/s Average min max 00:25:49.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10898.99 42.57 5873.95 2445.21 7990.00 00:25:49.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11095.69 43.34 5769.71 2745.44 7312.76 00:25:49.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9446.50 36.90 6775.86 2328.15 11299.63 00:25:49.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10836.99 42.33 5905.76 2322.61 8508.41 00:25:49.785 ======================================================== 00:25:49.785 Total : 42278.18 165.15 6056.27 2322.61 11299.63 00:25:49.785 00:25:49.785 09:59:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:25:49.785 09:59:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:49.785 09:59:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:25:49.785 09:59:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:49.785 09:59:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:25:49.785 09:59:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:49.785 09:59:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:49.785 rmmod nvme_tcp 00:25:49.785 rmmod nvme_fabrics 00:25:49.785 rmmod nvme_keyring 00:25:49.785 09:59:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:49.785 09:59:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:25:49.785 09:59:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:25:49.785 09:59:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1976279 ']' 00:25:49.785 09:59:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1976279 00:25:49.785 09:59:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1976279 ']' 00:25:49.785 09:59:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1976279 00:25:49.786 09:59:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:25:49.786 09:59:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:49.786 09:59:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1976279 00:25:49.786 09:59:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:49.786 09:59:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:49.786 09:59:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1976279' 00:25:49.786 killing process with pid 1976279 00:25:49.786 09:59:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1976279 00:25:49.786 09:59:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1976279 00:25:50.045 09:59:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:50.045 09:59:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:50.045 09:59:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:50.045 09:59:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:50.045 09:59:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:50.045 09:59:06 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.045 09:59:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.045 09:59:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.576 09:59:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:52.576 09:59:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:25:52.576 09:59:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:25:52.835 09:59:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:25:54.742 09:59:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:00.020 09:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:00.021 09:59:16 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:00.021 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:00.021 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:00.021 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:00.021 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:00.021 
09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:26:00.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:00.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms
00:26:00.021
00:26:00.021 --- 10.0.0.2 ping statistics ---
00:26:00.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:00.021 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:00.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:00.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms
00:26:00.021
00:26:00.021 --- 10.0.0.1 ping statistics ---
00:26:00.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:00.021 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:26:00.021 net.core.busy_poll = 1
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:26:00.021 net.core.busy_read = 1
00:26:00.021 09:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:26:00.022 09:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:26:00.022 09:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:26:00.022 09:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
00:26:00.022 09:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
00:26:00.022 09:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc
00:26:00.022 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:00.022 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:00.022 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:26:00.022 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1978961
00:26:00.022 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:26:00.022 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1978961
00:26:00.022 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1978961 ']'
00:26:00.022 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:00.022 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:00.022 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:00.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:00.022 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:00.022 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:26:00.022 [2024-07-15 09:59:16.684847] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:26:00.022 [2024-07-15 09:59:16.684959] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:00.022 EAL: No free 2048 kB hugepages reported on node 1
00:26:00.022 [2024-07-15 09:59:16.722307] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:26:00.022 [2024-07-15 09:59:16.747755] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:00.022 [2024-07-15 09:59:16.833313] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:00.022 [2024-07-15 09:59:16.833363] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:00.022 [2024-07-15 09:59:16.833391] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:00.022 [2024-07-15 09:59:16.833402] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:00.022 [2024-07-15 09:59:16.833412] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
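The adq_configure_driver trace above is the complete ADQ bring-up for this run. Condensed into a plain sequence (the commands are taken verbatim from the trace; the interface cvl_0_0, namespace cvl_0_0_ns_spdk, and target address 10.0.0.2 are the values from this host and will differ elsewhere):

# Enable hardware TC offload and turn off the ice driver's
# channel-pkt-inspect-optimize private flag on the target-side port.
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off

# Busy-poll sockets instead of sleeping on interrupts.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 gets queues 0-1, TC1 gets queues 2-3 (2@0 2@2).
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress

# Steer NVMe/TCP traffic (TCP dport 4420 toward 10.0.0.2) into TC1, in hardware only (skip_sw).
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The --sock-priority 1 passed to nvmf_create_transport further down appears to be what lines the target's sockets up with that second traffic class.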
00:26:00.280 [2024-07-15 09:59:16.833547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.280 [2024-07-15 09:59:16.833613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:00.280 [2024-07-15 09:59:16.833679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:00.280 [2024-07-15 09:59:16.833681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.280 09:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:00.280 09:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.280 09:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:00.280 09:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.280 09:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:00.280 [2024-07-15 09:59:17.058351] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.538 09:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.538 09:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:00.538 09:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.538 09:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:00.538 Malloc1 00:26:00.538 09:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.538 09:59:17 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:00.538 09:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:00.538 09:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:26:00.538 09:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:00.538 09:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:26:00.538 09:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:00.538 09:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:26:00.538 09:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:00.538 09:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:00.538 09:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:00.538 09:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:26:00.538 [2024-07-15 09:59:17.109033] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:00.538 09:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:00.538 09:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1978986
00:26:00.538 09:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:26:00.538 09:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2
00:26:00.538 EAL: No free 2048 kB hugepages reported on node 1
00:26:02.469 09:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats
00:26:02.469 09:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:02.469 09:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:26:02.469 09:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:02.469 09:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{
00:26:02.469 "tick_rate": 2700000000,
00:26:02.469 "poll_groups": [
00:26:02.469 {
00:26:02.469 "name": "nvmf_tgt_poll_group_000",
00:26:02.469 "admin_qpairs": 1,
00:26:02.469 "io_qpairs": 2,
00:26:02.469 "current_admin_qpairs": 1,
00:26:02.469 "current_io_qpairs": 2,
00:26:02.469 "pending_bdev_io": 0,
00:26:02.469 "completed_nvme_io": 26818,
00:26:02.469 "transports": [
00:26:02.469 {
00:26:02.469 "trtype": "TCP"
00:26:02.469 }
00:26:02.469 ]
00:26:02.469 },
00:26:02.469 {
00:26:02.469 "name": "nvmf_tgt_poll_group_001",
00:26:02.469 "admin_qpairs": 0,
00:26:02.469 "io_qpairs": 2,
00:26:02.469 "current_admin_qpairs": 0,
00:26:02.469 "current_io_qpairs": 2,
00:26:02.469 "pending_bdev_io": 0,
00:26:02.469 "completed_nvme_io": 23978,
00:26:02.469 "transports": [
00:26:02.469 {
00:26:02.469 "trtype": "TCP"
00:26:02.469 }
00:26:02.469 ]
00:26:02.469 },
00:26:02.469 {
00:26:02.469 "name": "nvmf_tgt_poll_group_002",
00:26:02.469 "admin_qpairs": 0,
00:26:02.469 "io_qpairs": 0,
00:26:02.469 "current_admin_qpairs": 0,
00:26:02.469 "current_io_qpairs": 0,
00:26:02.469 "pending_bdev_io": 0,
00:26:02.469 "completed_nvme_io": 0,
00:26:02.469 "transports": [
00:26:02.469 {
00:26:02.469 "trtype": "TCP"
00:26:02.469 }
00:26:02.469 ]
00:26:02.469 },
00:26:02.469 {
00:26:02.469 "name": "nvmf_tgt_poll_group_003",
00:26:02.469 "admin_qpairs": 0,
00:26:02.469 "io_qpairs": 0,
00:26:02.469 "current_admin_qpairs": 0,
00:26:02.469 "current_io_qpairs": 0,
00:26:02.469 "pending_bdev_io": 0,
00:26:02.469 "completed_nvme_io": 0,
00:26:02.469 "transports": [
00:26:02.469 {
00:26:02.469 "trtype": "TCP"
00:26:02.469 }
00:26:02.469 ]
00:26:02.469 }
00:26:02.469 ]
00:26:02.469 }'
00:26:02.469 09:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:26:02.469 09:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l
00:26:02.469 09:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2
00:26:02.469 09:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]]
00:26:02.469 09:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1978986
00:26:10.573 Initializing NVMe Controllers
00:26:10.573 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:10.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:26:10.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:26:10.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:26:10.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:26:10.573 Initialization complete. Launching workers.
00:26:10.573 ========================================================
00:26:10.573 Latency(us)
00:26:10.573 Device Information : IOPS MiB/s Average min max
00:26:10.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5819.90 22.73 10998.32 2404.12 54100.78
00:26:10.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7673.60 29.97 8366.22 1417.29 54590.82
00:26:10.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6984.20 27.28 9192.14 1230.44 54946.52
00:26:10.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5704.20 22.28 11234.03 1539.46 57404.81
00:26:10.573 ========================================================
00:26:10.573 Total : 26181.89 102.27 9796.42 1230.44 57404.81
00:26:10.573
00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini
00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:10.573 rmmod nvme_tcp
00:26:10.573 rmmod nvme_fabrics
00:26:10.573 rmmod nvme_keyring
00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1978961 ']'
00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- #
killprocess 1978961 00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1978961 ']' 00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1978961 00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1978961 00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1978961' 00:26:10.573 killing process with pid 1978961 00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1978961 00:26:10.573 09:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1978961 00:26:10.833 09:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:10.833 09:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:10.833 09:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:10.833 09:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:10.833 09:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:10.833 09:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.833 09:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:10.833 09:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.121 09:59:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:14.121 09:59:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:26:14.121 00:26:14.121 real 0m44.690s 00:26:14.121 user 2m34.941s 00:26:14.121 sys 0m11.213s 00:26:14.121 09:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:14.121 09:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:14.121 ************************************ 00:26:14.121 END TEST nvmf_perf_adq 00:26:14.121 ************************************ 00:26:14.121 09:59:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:14.121 09:59:30 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:14.121 09:59:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:14.121 09:59:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:14.121 09:59:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:14.121 ************************************ 00:26:14.121 START TEST nvmf_shutdown 00:26:14.121 ************************************ 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:14.121 * Looking for test storage... 
00:26:14.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.121 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:14.122 ************************************ 00:26:14.122 START TEST nvmf_shutdown_tc1 00:26:14.122 ************************************ 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:26:14.122 09:59:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:14.122 09:59:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:16.034 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:16.034 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:16.034 09:59:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:16.034 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:16.035 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:16.035 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:26:16.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:16.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms
00:26:16.035
00:26:16.035 --- 10.0.0.2 ping statistics ---
00:26:16.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:16.035 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms
00:26:16.035 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:16.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:16.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms
00:26:16.293
00:26:16.293 --- 10.0.0.1 ping statistics ---
00:26:16.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:16.293 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms
00:26:16.293 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:16.293 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0
00:26:16.293 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:26:16.293 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:16.293 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:16.293 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:16.293 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:16.293 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:16.293 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:16.293 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:26:16.293 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:16.294 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:16.294 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:26:16.294 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1982277
00:26:16.294 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:26:16.294 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1982277
00:26:16.294 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1982277 ']'
00:26:16.294 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:16.294 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:16.294 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:16.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:16.294 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:16.294 09:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:26:16.294 [2024-07-15 09:59:32.896971] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:26:16.294 [2024-07-15 09:59:32.897048] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.294 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.294 [2024-07-15 09:59:32.935751] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:16.294 [2024-07-15 09:59:32.968140] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:16.294 [2024-07-15 09:59:33.058773] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:16.294 [2024-07-15 09:59:33.058838] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:16.294 [2024-07-15 09:59:33.058854] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:16.294 [2024-07-15 09:59:33.058867] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:16.294 [2024-07-15 09:59:33.058886] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:16.294 [2024-07-15 09:59:33.058991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:16.294 [2024-07-15 09:59:33.059101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:16.294 [2024-07-15 09:59:33.059154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:16.294 [2024-07-15 09:59:33.059156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.552 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:16.552 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:26:16.552 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:16.552 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:16.553 [2024-07-15 09:59:33.221696] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:16.553 09:59:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.553 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:16.553 Malloc1 00:26:16.553 [2024-07-15 09:59:33.311128] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.810 Malloc2 00:26:16.810 Malloc3 00:26:16.810 Malloc4 00:26:16.810 Malloc5 00:26:16.810 Malloc6 00:26:16.810 Malloc7 00:26:17.068 Malloc8 00:26:17.068 Malloc9 00:26:17.068 Malloc10 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:17.068 09:59:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1982455 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1982455 /var/tmp/bdevperf.sock 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1982455 ']' 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:17.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.068 { 00:26:17.068 "params": { 00:26:17.068 "name": "Nvme$subsystem", 00:26:17.068 "trtype": "$TEST_TRANSPORT", 00:26:17.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.068 "adrfam": "ipv4", 00:26:17.068 "trsvcid": "$NVMF_PORT", 00:26:17.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.068 "hdgst": ${hdgst:-false}, 00:26:17.068 "ddgst": ${ddgst:-false} 00:26:17.068 }, 00:26:17.068 "method": "bdev_nvme_attach_controller" 00:26:17.068 } 00:26:17.068 EOF 00:26:17.068 )") 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.068 { 00:26:17.068 "params": { 00:26:17.068 "name": "Nvme$subsystem", 00:26:17.068 "trtype": "$TEST_TRANSPORT", 00:26:17.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.068 "adrfam": "ipv4", 00:26:17.068 "trsvcid": "$NVMF_PORT", 00:26:17.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.068 "hdgst": ${hdgst:-false}, 00:26:17.068 "ddgst": ${ddgst:-false} 00:26:17.068 }, 00:26:17.068 "method": "bdev_nvme_attach_controller" 00:26:17.068 } 00:26:17.068 EOF 00:26:17.068 )") 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.068 { 00:26:17.068 "params": { 00:26:17.068 "name": "Nvme$subsystem", 00:26:17.068 "trtype": "$TEST_TRANSPORT", 00:26:17.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.068 "adrfam": "ipv4", 00:26:17.068 "trsvcid": "$NVMF_PORT", 00:26:17.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.068 "hdgst": ${hdgst:-false}, 00:26:17.068 "ddgst": ${ddgst:-false} 00:26:17.068 }, 00:26:17.068 "method": "bdev_nvme_attach_controller" 00:26:17.068 } 00:26:17.068 EOF 00:26:17.068 )") 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.068 { 00:26:17.068 "params": { 00:26:17.068 "name": "Nvme$subsystem", 00:26:17.068 "trtype": "$TEST_TRANSPORT", 00:26:17.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.068 "adrfam": "ipv4", 00:26:17.068 "trsvcid": "$NVMF_PORT", 00:26:17.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.068 "hdgst": ${hdgst:-false}, 00:26:17.068 "ddgst": ${ddgst:-false} 00:26:17.068 }, 00:26:17.068 "method": "bdev_nvme_attach_controller" 00:26:17.068 } 00:26:17.068 EOF 00:26:17.068 )") 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.068 { 00:26:17.068 "params": { 00:26:17.068 "name": "Nvme$subsystem", 00:26:17.068 "trtype": "$TEST_TRANSPORT", 00:26:17.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.068 "adrfam": "ipv4", 00:26:17.068 "trsvcid": "$NVMF_PORT", 00:26:17.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.068 "hdgst": ${hdgst:-false}, 00:26:17.068 "ddgst": ${ddgst:-false} 00:26:17.068 }, 00:26:17.068 "method": "bdev_nvme_attach_controller" 00:26:17.068 } 00:26:17.068 EOF 00:26:17.068 )") 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.068 { 00:26:17.068 "params": { 00:26:17.068 "name": "Nvme$subsystem", 00:26:17.068 "trtype": "$TEST_TRANSPORT", 00:26:17.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.068 "adrfam": "ipv4", 00:26:17.068 "trsvcid": "$NVMF_PORT", 00:26:17.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.068 "hdgst": ${hdgst:-false}, 00:26:17.068 "ddgst": ${ddgst:-false} 00:26:17.068 }, 00:26:17.068 "method": "bdev_nvme_attach_controller" 00:26:17.068 } 00:26:17.068 EOF 00:26:17.068 )") 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:26:17.068 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.068 { 00:26:17.068 "params": { 00:26:17.068 "name": "Nvme$subsystem", 00:26:17.068 "trtype": "$TEST_TRANSPORT", 00:26:17.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.068 "adrfam": "ipv4", 00:26:17.068 "trsvcid": "$NVMF_PORT", 00:26:17.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.068 "hdgst": ${hdgst:-false}, 00:26:17.068 "ddgst": ${ddgst:-false} 00:26:17.068 }, 00:26:17.068 "method": "bdev_nvme_attach_controller" 00:26:17.068 } 00:26:17.068 EOF 00:26:17.068 )") 00:26:17.069 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:17.069 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.069 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.069 { 00:26:17.069 "params": { 00:26:17.069 "name": "Nvme$subsystem", 00:26:17.069 "trtype": "$TEST_TRANSPORT", 00:26:17.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.069 "adrfam": "ipv4", 00:26:17.069 "trsvcid": "$NVMF_PORT", 00:26:17.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.069 "hdgst": ${hdgst:-false}, 00:26:17.069 "ddgst": ${ddgst:-false} 00:26:17.069 }, 00:26:17.069 "method": "bdev_nvme_attach_controller" 00:26:17.069 } 00:26:17.069 EOF 00:26:17.069 )") 00:26:17.069 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:17.069 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.069 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.069 { 00:26:17.069 "params": { 00:26:17.069 "name": "Nvme$subsystem", 00:26:17.069 "trtype": "$TEST_TRANSPORT", 00:26:17.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.069 "adrfam": "ipv4", 00:26:17.069 "trsvcid": "$NVMF_PORT", 00:26:17.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.069 "hdgst": ${hdgst:-false}, 00:26:17.069 "ddgst": ${ddgst:-false} 00:26:17.069 }, 00:26:17.069 "method": "bdev_nvme_attach_controller" 00:26:17.069 } 00:26:17.069 EOF 00:26:17.069 )") 00:26:17.069 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:17.069 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.069 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.069 { 00:26:17.069 "params": { 00:26:17.069 "name": "Nvme$subsystem", 00:26:17.069 "trtype": "$TEST_TRANSPORT", 00:26:17.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.069 "adrfam": "ipv4", 00:26:17.069 "trsvcid": "$NVMF_PORT", 00:26:17.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.069 "hdgst": ${hdgst:-false}, 00:26:17.069 "ddgst": ${ddgst:-false} 00:26:17.069 }, 00:26:17.069 "method": "bdev_nvme_attach_controller" 00:26:17.069 } 00:26:17.069 EOF 00:26:17.069 )") 00:26:17.069 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:17.069 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:26:17.069 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:17.069 09:59:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:17.069 "params": { 00:26:17.069 "name": "Nvme1", 00:26:17.069 "trtype": "tcp", 00:26:17.069 "traddr": "10.0.0.2", 00:26:17.069 "adrfam": "ipv4", 00:26:17.069 "trsvcid": "4420", 00:26:17.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:17.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:17.069 "hdgst": false, 00:26:17.069 "ddgst": false 00:26:17.069 }, 00:26:17.069 "method": "bdev_nvme_attach_controller" 00:26:17.069 },{ 00:26:17.069 "params": { 00:26:17.069 "name": "Nvme2", 00:26:17.069 "trtype": "tcp", 00:26:17.069 "traddr": "10.0.0.2", 00:26:17.069 "adrfam": "ipv4", 00:26:17.069 "trsvcid": "4420", 00:26:17.069 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:17.069 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:17.069 "hdgst": false, 00:26:17.069 "ddgst": false 00:26:17.069 }, 00:26:17.069 "method": "bdev_nvme_attach_controller" 00:26:17.069 },{ 00:26:17.069 "params": { 00:26:17.069 "name": "Nvme3", 00:26:17.069 "trtype": "tcp", 00:26:17.069 "traddr": "10.0.0.2", 00:26:17.069 "adrfam": "ipv4", 00:26:17.069 "trsvcid": "4420", 00:26:17.069 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:17.069 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:17.069 "hdgst": false, 00:26:17.069 "ddgst": false 00:26:17.069 }, 00:26:17.069 "method": "bdev_nvme_attach_controller" 00:26:17.069 },{ 00:26:17.069 "params": { 00:26:17.069 "name": "Nvme4", 00:26:17.069 "trtype": "tcp", 00:26:17.069 "traddr": "10.0.0.2", 00:26:17.069 "adrfam": "ipv4", 00:26:17.069 "trsvcid": "4420", 00:26:17.069 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:17.069 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:17.069 "hdgst": false, 00:26:17.069 "ddgst": false 00:26:17.069 }, 00:26:17.069 "method": "bdev_nvme_attach_controller" 00:26:17.069 },{ 00:26:17.069 "params": { 00:26:17.069 "name": "Nvme5", 00:26:17.069 "trtype": "tcp", 00:26:17.069 "traddr": "10.0.0.2", 00:26:17.069 "adrfam": "ipv4", 00:26:17.069 "trsvcid": "4420", 00:26:17.069 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:17.069 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:17.069 "hdgst": false, 00:26:17.069 "ddgst": false 00:26:17.069 }, 00:26:17.069 "method": "bdev_nvme_attach_controller" 00:26:17.069 },{ 00:26:17.069 "params": { 00:26:17.069 "name": "Nvme6", 00:26:17.069 "trtype": "tcp", 00:26:17.069 "traddr": "10.0.0.2", 00:26:17.069 "adrfam": "ipv4", 00:26:17.069 "trsvcid": "4420", 00:26:17.069 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:17.069 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:17.069 "hdgst": false, 00:26:17.069 "ddgst": false 00:26:17.069 }, 00:26:17.069 "method": "bdev_nvme_attach_controller" 00:26:17.069 },{ 00:26:17.069 "params": { 00:26:17.069 "name": "Nvme7", 00:26:17.069 "trtype": "tcp", 00:26:17.069 "traddr": "10.0.0.2", 00:26:17.069 "adrfam": "ipv4", 00:26:17.069 "trsvcid": "4420", 00:26:17.069 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:17.069 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:17.069 "hdgst": false, 00:26:17.069 "ddgst": false 00:26:17.069 }, 00:26:17.069 "method": "bdev_nvme_attach_controller" 00:26:17.069 },{ 00:26:17.069 "params": { 00:26:17.069 "name": "Nvme8", 00:26:17.069 "trtype": "tcp", 00:26:17.069 "traddr": "10.0.0.2", 00:26:17.069 "adrfam": "ipv4", 00:26:17.069 "trsvcid": "4420", 00:26:17.069 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:17.069 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:17.069 "hdgst": false, 
00:26:17.069 "ddgst": false 00:26:17.069 }, 00:26:17.069 "method": "bdev_nvme_attach_controller" 00:26:17.069 },{ 00:26:17.069 "params": { 00:26:17.069 "name": "Nvme9", 00:26:17.069 "trtype": "tcp", 00:26:17.069 "traddr": "10.0.0.2", 00:26:17.069 "adrfam": "ipv4", 00:26:17.069 "trsvcid": "4420", 00:26:17.069 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:17.069 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:17.069 "hdgst": false, 00:26:17.069 "ddgst": false 00:26:17.069 }, 00:26:17.069 "method": "bdev_nvme_attach_controller" 00:26:17.069 },{ 00:26:17.069 "params": { 00:26:17.069 "name": "Nvme10", 00:26:17.069 "trtype": "tcp", 00:26:17.069 "traddr": "10.0.0.2", 00:26:17.069 "adrfam": "ipv4", 00:26:17.069 "trsvcid": "4420", 00:26:17.069 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:17.069 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:17.069 "hdgst": false, 00:26:17.069 "ddgst": false 00:26:17.069 }, 00:26:17.069 "method": "bdev_nvme_attach_controller" 00:26:17.069 }' 00:26:17.069 [2024-07-15 09:59:33.827809] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:26:17.069 [2024-07-15 09:59:33.827917] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:17.326 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.326 [2024-07-15 09:59:33.864526] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:17.326 [2024-07-15 09:59:33.894076] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.326 [2024-07-15 09:59:33.980473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.223 09:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:19.223 09:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:26:19.223 09:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:19.223 09:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.223 09:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:19.223 09:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.223 09:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1982455 00:26:19.223 09:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:19.223 09:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:26:20.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1982455 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1982277 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.157 { 00:26:20.157 "params": { 00:26:20.157 "name": "Nvme$subsystem", 00:26:20.157 "trtype": "$TEST_TRANSPORT", 00:26:20.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.157 "adrfam": "ipv4", 00:26:20.157 "trsvcid": "$NVMF_PORT", 00:26:20.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.157 "hdgst": ${hdgst:-false}, 00:26:20.157 "ddgst": ${ddgst:-false} 00:26:20.157 }, 00:26:20.157 "method": "bdev_nvme_attach_controller" 00:26:20.157 } 00:26:20.157 EOF 00:26:20.157 )") 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.157 { 00:26:20.157 "params": { 00:26:20.157 "name": "Nvme$subsystem", 00:26:20.157 "trtype": "$TEST_TRANSPORT", 00:26:20.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.157 "adrfam": "ipv4", 00:26:20.157 "trsvcid": "$NVMF_PORT", 00:26:20.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.157 "hdgst": ${hdgst:-false}, 00:26:20.157 "ddgst": ${ddgst:-false} 00:26:20.157 }, 00:26:20.157 "method": "bdev_nvme_attach_controller" 00:26:20.157 } 00:26:20.157 EOF 00:26:20.157 )") 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.157 { 00:26:20.157 "params": { 00:26:20.157 "name": "Nvme$subsystem", 00:26:20.157 "trtype": "$TEST_TRANSPORT", 00:26:20.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.157 "adrfam": "ipv4", 00:26:20.157 "trsvcid": "$NVMF_PORT", 00:26:20.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.157 "hdgst": ${hdgst:-false}, 00:26:20.157 "ddgst": ${ddgst:-false} 00:26:20.157 }, 00:26:20.157 "method": "bdev_nvme_attach_controller" 00:26:20.157 } 00:26:20.157 EOF 00:26:20.157 )") 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.157 { 00:26:20.157 "params": { 00:26:20.157 "name": "Nvme$subsystem", 00:26:20.157 "trtype": "$TEST_TRANSPORT", 00:26:20.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.157 "adrfam": "ipv4", 00:26:20.157 "trsvcid": "$NVMF_PORT", 00:26:20.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.157 "hdgst": ${hdgst:-false}, 00:26:20.157 "ddgst": 
${ddgst:-false} 00:26:20.157 }, 00:26:20.157 "method": "bdev_nvme_attach_controller" 00:26:20.157 } 00:26:20.157 EOF 00:26:20.157 )") 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.157 { 00:26:20.157 "params": { 00:26:20.157 "name": "Nvme$subsystem", 00:26:20.157 "trtype": "$TEST_TRANSPORT", 00:26:20.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.157 "adrfam": "ipv4", 00:26:20.157 "trsvcid": "$NVMF_PORT", 00:26:20.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.157 "hdgst": ${hdgst:-false}, 00:26:20.157 "ddgst": ${ddgst:-false} 00:26:20.157 }, 00:26:20.157 "method": "bdev_nvme_attach_controller" 00:26:20.157 } 00:26:20.157 EOF 00:26:20.157 )") 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.157 { 00:26:20.157 "params": { 00:26:20.157 "name": "Nvme$subsystem", 00:26:20.157 "trtype": "$TEST_TRANSPORT", 00:26:20.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.157 "adrfam": "ipv4", 00:26:20.157 "trsvcid": "$NVMF_PORT", 00:26:20.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.157 "hdgst": ${hdgst:-false}, 00:26:20.157 "ddgst": ${ddgst:-false} 00:26:20.157 }, 00:26:20.157 "method": "bdev_nvme_attach_controller" 00:26:20.157 } 00:26:20.157 EOF 00:26:20.157 )") 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.157 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.157 { 00:26:20.157 "params": { 00:26:20.157 "name": "Nvme$subsystem", 00:26:20.157 "trtype": "$TEST_TRANSPORT", 00:26:20.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.157 "adrfam": "ipv4", 00:26:20.157 "trsvcid": "$NVMF_PORT", 00:26:20.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.157 "hdgst": ${hdgst:-false}, 00:26:20.157 "ddgst": ${ddgst:-false} 00:26:20.157 }, 00:26:20.157 "method": "bdev_nvme_attach_controller" 00:26:20.157 } 00:26:20.157 EOF 00:26:20.158 )") 00:26:20.158 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.158 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.158 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.158 { 00:26:20.158 "params": { 00:26:20.158 "name": "Nvme$subsystem", 00:26:20.158 "trtype": "$TEST_TRANSPORT", 00:26:20.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.158 "adrfam": "ipv4", 00:26:20.158 "trsvcid": "$NVMF_PORT", 00:26:20.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.158 "hdgst": ${hdgst:-false}, 00:26:20.158 "ddgst": ${ddgst:-false} 00:26:20.158 
}, 00:26:20.158 "method": "bdev_nvme_attach_controller" 00:26:20.158 } 00:26:20.158 EOF 00:26:20.158 )") 00:26:20.158 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.158 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.158 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.158 { 00:26:20.158 "params": { 00:26:20.158 "name": "Nvme$subsystem", 00:26:20.158 "trtype": "$TEST_TRANSPORT", 00:26:20.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.158 "adrfam": "ipv4", 00:26:20.158 "trsvcid": "$NVMF_PORT", 00:26:20.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.158 "hdgst": ${hdgst:-false}, 00:26:20.158 "ddgst": ${ddgst:-false} 00:26:20.158 }, 00:26:20.158 "method": "bdev_nvme_attach_controller" 00:26:20.158 } 00:26:20.158 EOF 00:26:20.158 )") 00:26:20.158 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.158 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.158 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.158 { 00:26:20.158 "params": { 00:26:20.158 "name": "Nvme$subsystem", 00:26:20.158 "trtype": "$TEST_TRANSPORT", 00:26:20.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.158 "adrfam": "ipv4", 00:26:20.158 "trsvcid": "$NVMF_PORT", 00:26:20.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.158 "hdgst": ${hdgst:-false}, 00:26:20.158 "ddgst": ${ddgst:-false} 00:26:20.158 }, 00:26:20.158 "method": "bdev_nvme_attach_controller" 00:26:20.158 } 00:26:20.158 EOF 00:26:20.158 )") 00:26:20.158 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.158 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:26:20.158 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:20.158 09:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:20.158 "params": { 00:26:20.158 "name": "Nvme1", 00:26:20.158 "trtype": "tcp", 00:26:20.158 "traddr": "10.0.0.2", 00:26:20.158 "adrfam": "ipv4", 00:26:20.158 "trsvcid": "4420", 00:26:20.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:20.158 "hdgst": false, 00:26:20.158 "ddgst": false 00:26:20.158 }, 00:26:20.158 "method": "bdev_nvme_attach_controller" 00:26:20.158 },{ 00:26:20.158 "params": { 00:26:20.158 "name": "Nvme2", 00:26:20.158 "trtype": "tcp", 00:26:20.158 "traddr": "10.0.0.2", 00:26:20.158 "adrfam": "ipv4", 00:26:20.158 "trsvcid": "4420", 00:26:20.158 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:20.158 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:20.158 "hdgst": false, 00:26:20.158 "ddgst": false 00:26:20.158 }, 00:26:20.158 "method": "bdev_nvme_attach_controller" 00:26:20.158 },{ 00:26:20.158 "params": { 00:26:20.158 "name": "Nvme3", 00:26:20.158 "trtype": "tcp", 00:26:20.158 "traddr": "10.0.0.2", 00:26:20.158 "adrfam": "ipv4", 00:26:20.158 "trsvcid": "4420", 00:26:20.158 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:20.158 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:20.158 "hdgst": false, 00:26:20.158 "ddgst": false 00:26:20.158 }, 00:26:20.158 "method": "bdev_nvme_attach_controller" 00:26:20.158 },{ 00:26:20.158 "params": { 00:26:20.158 "name": "Nvme4", 00:26:20.158 "trtype": "tcp", 00:26:20.158 "traddr": "10.0.0.2", 00:26:20.158 "adrfam": "ipv4", 00:26:20.158 "trsvcid": "4420", 00:26:20.158 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:20.158 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:20.158 "hdgst": false, 00:26:20.158 "ddgst": false 00:26:20.158 }, 00:26:20.158 "method": "bdev_nvme_attach_controller" 00:26:20.158 },{ 00:26:20.158 "params": { 00:26:20.158 "name": "Nvme5", 00:26:20.158 "trtype": "tcp", 00:26:20.158 "traddr": "10.0.0.2", 00:26:20.158 "adrfam": "ipv4", 00:26:20.158 "trsvcid": "4420", 00:26:20.158 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:20.158 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:20.158 "hdgst": false, 00:26:20.158 "ddgst": false 00:26:20.158 }, 00:26:20.158 "method": "bdev_nvme_attach_controller" 00:26:20.158 },{ 00:26:20.158 "params": { 00:26:20.158 "name": "Nvme6", 00:26:20.158 "trtype": "tcp", 00:26:20.158 "traddr": "10.0.0.2", 00:26:20.158 "adrfam": "ipv4", 00:26:20.158 "trsvcid": "4420", 00:26:20.158 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:20.158 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:20.158 "hdgst": false, 00:26:20.158 "ddgst": false 00:26:20.158 }, 00:26:20.158 "method": "bdev_nvme_attach_controller" 00:26:20.158 },{ 00:26:20.158 "params": { 00:26:20.158 "name": "Nvme7", 00:26:20.158 "trtype": "tcp", 00:26:20.158 "traddr": "10.0.0.2", 00:26:20.158 "adrfam": "ipv4", 00:26:20.158 "trsvcid": "4420", 00:26:20.158 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:20.158 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:20.158 "hdgst": false, 00:26:20.158 "ddgst": false 00:26:20.158 }, 00:26:20.158 "method": "bdev_nvme_attach_controller" 00:26:20.158 },{ 00:26:20.158 "params": { 00:26:20.158 "name": "Nvme8", 00:26:20.158 "trtype": "tcp", 00:26:20.158 "traddr": "10.0.0.2", 00:26:20.158 "adrfam": "ipv4", 00:26:20.158 "trsvcid": "4420", 00:26:20.158 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:20.158 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:20.158 "hdgst": false, 
00:26:20.158 "ddgst": false 00:26:20.158 }, 00:26:20.158 "method": "bdev_nvme_attach_controller" 00:26:20.158 },{ 00:26:20.158 "params": { 00:26:20.158 "name": "Nvme9", 00:26:20.158 "trtype": "tcp", 00:26:20.158 "traddr": "10.0.0.2", 00:26:20.158 "adrfam": "ipv4", 00:26:20.158 "trsvcid": "4420", 00:26:20.158 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:20.158 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:20.158 "hdgst": false, 00:26:20.158 "ddgst": false 00:26:20.158 }, 00:26:20.158 "method": "bdev_nvme_attach_controller" 00:26:20.158 },{ 00:26:20.158 "params": { 00:26:20.158 "name": "Nvme10", 00:26:20.158 "trtype": "tcp", 00:26:20.158 "traddr": "10.0.0.2", 00:26:20.158 "adrfam": "ipv4", 00:26:20.158 "trsvcid": "4420", 00:26:20.158 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:20.158 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:20.158 "hdgst": false, 00:26:20.158 "ddgst": false 00:26:20.158 }, 00:26:20.158 "method": "bdev_nvme_attach_controller" 00:26:20.158 }' 00:26:20.158 [2024-07-15 09:59:36.878980] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:26:20.158 [2024-07-15 09:59:36.879067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1982804 ] 00:26:20.158 EAL: No free 2048 kB hugepages reported on node 1 00:26:20.158 [2024-07-15 09:59:36.916765] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:20.416 [2024-07-15 09:59:36.946060] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.416 [2024-07-15 09:59:37.036607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.789 Running I/O for 1 seconds... 
00:26:23.166
00:26:23.166 Latency(us)
00:26:23.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:23.166 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:23.166 Verification LBA range: start 0x0 length 0x400
00:26:23.166 Nvme1n1 : 1.12 228.09 14.26 0.00 0.00 277849.88 21748.24 265639.25
00:26:23.166 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:23.166 Verification LBA range: start 0x0 length 0x400
00:26:23.166 Nvme2n1 : 1.16 221.27 13.83 0.00 0.00 281870.41 18932.62 265639.25
00:26:23.166 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:23.166 Verification LBA range: start 0x0 length 0x400
00:26:23.166 Nvme3n1 : 1.11 231.20 14.45 0.00 0.00 264953.55 20194.80 260978.92
00:26:23.166 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:23.166 Verification LBA range: start 0x0 length 0x400
00:26:23.166 Nvme4n1 : 1.10 233.01 14.56 0.00 0.00 258143.76 18447.17 253211.69
00:26:23.166 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:23.166 Verification LBA range: start 0x0 length 0x400
00:26:23.166 Nvme5n1 : 1.11 230.34 14.40 0.00 0.00 256665.03 18350.08 259425.47
00:26:23.166 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:23.166 Verification LBA range: start 0x0 length 0x400
00:26:23.166 Nvme6n1 : 1.14 227.50 14.22 0.00 0.00 250303.46 19418.07 256318.58
00:26:23.166 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:23.166 Verification LBA range: start 0x0 length 0x400
00:26:23.166 Nvme7n1 : 1.15 223.54 13.97 0.00 0.00 255667.39 30292.20 245444.46
00:26:23.166 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:23.166 Verification LBA range: start 0x0 length 0x400
00:26:23.166 Nvme8n1 : 1.17 272.57 17.04 0.00 0.00 206679.61 11408.12 259425.47
00:26:23.166 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:23.166 Verification LBA range: start 0x0 length 0x400
00:26:23.166 Nvme9n1 : 1.16 224.69 14.04 0.00 0.00 246348.84 3021.94 265639.25
00:26:23.166 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:23.166 Verification LBA range: start 0x0 length 0x400
00:26:23.166 Nvme10n1 : 1.17 219.46 13.72 0.00 0.00 248344.27 20583.16 290494.39
00:26:23.166 ===================================================================================================================
00:26:23.166 Total : 2311.67 144.48 0.00 0.00 253493.34 3021.94 290494.39
00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 --
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:23.166 rmmod nvme_tcp 00:26:23.166 rmmod nvme_fabrics 00:26:23.166 rmmod nvme_keyring 00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1982277 ']' 00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1982277 00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1982277 ']' 00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1982277 00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:23.166 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1982277 00:26:23.460 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:23.460 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:23.460 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1982277' 00:26:23.460 killing process with pid 1982277 00:26:23.460 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1982277 00:26:23.460 09:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1982277 00:26:23.719 09:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:23.719 09:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:23.719 09:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:23.719 09:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:23.719 09:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:23.719 09:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.719 09:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:23.719 09:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:26.250 00:26:26.250 real 0m11.681s 00:26:26.250 user 0m34.089s 00:26:26.250 sys 0m3.144s 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:26.250 09:59:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.250 ************************************ 00:26:26.250 END TEST nvmf_shutdown_tc1 00:26:26.250 ************************************ 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:26.250 ************************************ 00:26:26.250 START TEST nvmf_shutdown_tc2 00:26:26.250 ************************************ 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local 
-ga net_devs 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:26.250 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.250 09:59:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:26.250 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:26.250 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:26.250 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:26.250 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:26.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:26.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms
00:26:26.251
00:26:26.251 --- 10.0.0.2 ping statistics ---
00:26:26.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:26.251 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:26.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:26.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms
00:26:26.251
00:26:26.251 --- 10.0.0.1 ping statistics ---
00:26:26.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:26.251 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1983632
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1983632
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1983632 ']'
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:26.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
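[Editor's note] The nvmf/common.sh@229-@268 entries above build the two-namespace loopback topology that tc2 runs on: of the two ice ports discovered under 0000:0a:00.x, cvl_0_0 becomes the target interface (10.0.0.2) inside the cvl_0_0_ns_spdk namespace, while cvl_0_1 stays in the root namespace as the initiator interface (10.0.0.1). Collected from the trace into one sequence (interface and namespace names are whatever this particular host produced):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean addresses
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                     # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns

Every later target-side command is then wrapped in `ip netns exec cvl_0_0_ns_spdk` via NVMF_TARGET_NS_CMD (folded into NVMF_APP at @270), which is why the nvmf_tgt launch at @480 above shows the netns prefix twice; the second exec merely re-enters the same namespace and is harmless.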
00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:26.251 09:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.251 [2024-07-15 09:59:42.746187] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:26:26.251 [2024-07-15 09:59:42.746287] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.251 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.251 [2024-07-15 09:59:42.786219] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:26.251 [2024-07-15 09:59:42.812514] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:26.251 [2024-07-15 09:59:42.902438] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.251 [2024-07-15 09:59:42.902498] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.251 [2024-07-15 09:59:42.902527] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.251 [2024-07-15 09:59:42.902538] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.251 [2024-07-15 09:59:42.902547] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:26.251 [2024-07-15 09:59:42.902631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:26.251 [2024-07-15 09:59:42.902708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:26.251 [2024-07-15 09:59:42.902769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:26.251 [2024-07-15 09:59:42.902771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.251 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:26.251 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:26:26.251 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:26.251 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:26.251 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.508 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.509 [2024-07-15 09:59:43.049745] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:26.509 09:59:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.509 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.509 Malloc1 00:26:26.509 [2024-07-15 09:59:43.131343] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.509 Malloc2 00:26:26.509 Malloc3 00:26:26.509 Malloc4 00:26:26.766 Malloc5 00:26:26.766 Malloc6 00:26:26.766 Malloc7 00:26:26.766 Malloc8 00:26:26.766 Malloc9 00:26:26.766 Malloc10 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.024 09:59:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1983700 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1983700 /var/tmp/bdevperf.sock 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1983700 ']' 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:27.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.024 { 00:26:27.024 "params": { 00:26:27.024 "name": "Nvme$subsystem", 00:26:27.024 "trtype": "$TEST_TRANSPORT", 00:26:27.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.024 "adrfam": "ipv4", 00:26:27.024 "trsvcid": "$NVMF_PORT", 00:26:27.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.024 "hdgst": ${hdgst:-false}, 00:26:27.024 "ddgst": ${ddgst:-false} 00:26:27.024 }, 00:26:27.024 "method": "bdev_nvme_attach_controller" 00:26:27.024 } 00:26:27.024 EOF 00:26:27.024 )") 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.024 { 00:26:27.024 "params": { 00:26:27.024 "name": "Nvme$subsystem", 00:26:27.024 "trtype": "$TEST_TRANSPORT", 00:26:27.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.024 "adrfam": "ipv4", 00:26:27.024 "trsvcid": "$NVMF_PORT", 00:26:27.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:26:27.024 "hdgst": ${hdgst:-false}, 00:26:27.024 "ddgst": ${ddgst:-false} 00:26:27.024 }, 00:26:27.024 "method": "bdev_nvme_attach_controller" 00:26:27.024 } 00:26:27.024 EOF 00:26:27.024 )") 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.024 { 00:26:27.024 "params": { 00:26:27.024 "name": "Nvme$subsystem", 00:26:27.024 "trtype": "$TEST_TRANSPORT", 00:26:27.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.024 "adrfam": "ipv4", 00:26:27.024 "trsvcid": "$NVMF_PORT", 00:26:27.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.024 "hdgst": ${hdgst:-false}, 00:26:27.024 "ddgst": ${ddgst:-false} 00:26:27.024 }, 00:26:27.024 "method": "bdev_nvme_attach_controller" 00:26:27.024 } 00:26:27.024 EOF 00:26:27.024 )") 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.024 { 00:26:27.024 "params": { 00:26:27.024 "name": "Nvme$subsystem", 00:26:27.024 "trtype": "$TEST_TRANSPORT", 00:26:27.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.024 "adrfam": "ipv4", 00:26:27.024 "trsvcid": "$NVMF_PORT", 00:26:27.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.024 "hdgst": ${hdgst:-false}, 00:26:27.024 "ddgst": ${ddgst:-false} 00:26:27.024 }, 00:26:27.024 "method": "bdev_nvme_attach_controller" 00:26:27.024 } 00:26:27.024 EOF 00:26:27.024 )") 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.024 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.024 { 00:26:27.025 "params": { 00:26:27.025 "name": "Nvme$subsystem", 00:26:27.025 "trtype": "$TEST_TRANSPORT", 00:26:27.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.025 "adrfam": "ipv4", 00:26:27.025 "trsvcid": "$NVMF_PORT", 00:26:27.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.025 "hdgst": ${hdgst:-false}, 00:26:27.025 "ddgst": ${ddgst:-false} 00:26:27.025 }, 00:26:27.025 "method": "bdev_nvme_attach_controller" 00:26:27.025 } 00:26:27.025 EOF 00:26:27.025 )") 00:26:27.025 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:27.025 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.025 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.025 { 00:26:27.025 "params": { 00:26:27.025 "name": "Nvme$subsystem", 00:26:27.025 "trtype": "$TEST_TRANSPORT", 00:26:27.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.025 "adrfam": "ipv4", 00:26:27.025 "trsvcid": "$NVMF_PORT", 00:26:27.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.025 "hdgst": 
${hdgst:-false}, 00:26:27.025 "ddgst": ${ddgst:-false} 00:26:27.025 }, 00:26:27.025 "method": "bdev_nvme_attach_controller" 00:26:27.025 } 00:26:27.025 EOF 00:26:27.025 )") 00:26:27.025 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:27.025 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.025 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.025 { 00:26:27.025 "params": { 00:26:27.025 "name": "Nvme$subsystem", 00:26:27.025 "trtype": "$TEST_TRANSPORT", 00:26:27.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.025 "adrfam": "ipv4", 00:26:27.025 "trsvcid": "$NVMF_PORT", 00:26:27.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.025 "hdgst": ${hdgst:-false}, 00:26:27.025 "ddgst": ${ddgst:-false} 00:26:27.025 }, 00:26:27.025 "method": "bdev_nvme_attach_controller" 00:26:27.025 } 00:26:27.025 EOF 00:26:27.025 )") 00:26:27.025 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:27.025 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.025 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.025 { 00:26:27.025 "params": { 00:26:27.025 "name": "Nvme$subsystem", 00:26:27.025 "trtype": "$TEST_TRANSPORT", 00:26:27.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.025 "adrfam": "ipv4", 00:26:27.025 "trsvcid": "$NVMF_PORT", 00:26:27.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.025 "hdgst": ${hdgst:-false}, 00:26:27.025 "ddgst": ${ddgst:-false} 00:26:27.025 }, 00:26:27.025 "method": "bdev_nvme_attach_controller" 00:26:27.025 } 00:26:27.025 EOF 00:26:27.025 )") 00:26:27.025 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:27.025 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.025 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.025 { 00:26:27.025 "params": { 00:26:27.025 "name": "Nvme$subsystem", 00:26:27.025 "trtype": "$TEST_TRANSPORT", 00:26:27.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.025 "adrfam": "ipv4", 00:26:27.025 "trsvcid": "$NVMF_PORT", 00:26:27.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.025 "hdgst": ${hdgst:-false}, 00:26:27.025 "ddgst": ${ddgst:-false} 00:26:27.025 }, 00:26:27.025 "method": "bdev_nvme_attach_controller" 00:26:27.025 } 00:26:27.025 EOF 00:26:27.025 )") 00:26:27.025 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:27.025 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.025 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.025 { 00:26:27.025 "params": { 00:26:27.025 "name": "Nvme$subsystem", 00:26:27.025 "trtype": "$TEST_TRANSPORT", 00:26:27.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.025 "adrfam": "ipv4", 00:26:27.025 "trsvcid": "$NVMF_PORT", 00:26:27.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.025 "hdgst": ${hdgst:-false}, 00:26:27.025 
"ddgst": ${ddgst:-false} 00:26:27.025 }, 00:26:27.025 "method": "bdev_nvme_attach_controller" 00:26:27.025 } 00:26:27.025 EOF 00:26:27.025 )") 00:26:27.025 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:27.025 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:26:27.025 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:26:27.025 09:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:27.025 "params": { 00:26:27.025 "name": "Nvme1", 00:26:27.025 "trtype": "tcp", 00:26:27.025 "traddr": "10.0.0.2", 00:26:27.025 "adrfam": "ipv4", 00:26:27.025 "trsvcid": "4420", 00:26:27.025 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.025 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:27.025 "hdgst": false, 00:26:27.025 "ddgst": false 00:26:27.025 }, 00:26:27.025 "method": "bdev_nvme_attach_controller" 00:26:27.025 },{ 00:26:27.025 "params": { 00:26:27.025 "name": "Nvme2", 00:26:27.025 "trtype": "tcp", 00:26:27.025 "traddr": "10.0.0.2", 00:26:27.025 "adrfam": "ipv4", 00:26:27.025 "trsvcid": "4420", 00:26:27.025 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:27.025 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:27.025 "hdgst": false, 00:26:27.025 "ddgst": false 00:26:27.025 }, 00:26:27.025 "method": "bdev_nvme_attach_controller" 00:26:27.025 },{ 00:26:27.025 "params": { 00:26:27.025 "name": "Nvme3", 00:26:27.025 "trtype": "tcp", 00:26:27.025 "traddr": "10.0.0.2", 00:26:27.025 "adrfam": "ipv4", 00:26:27.025 "trsvcid": "4420", 00:26:27.025 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:27.025 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:27.025 "hdgst": false, 00:26:27.025 "ddgst": false 00:26:27.025 }, 00:26:27.025 "method": "bdev_nvme_attach_controller" 00:26:27.025 },{ 00:26:27.025 "params": { 00:26:27.025 "name": "Nvme4", 00:26:27.025 "trtype": "tcp", 00:26:27.025 "traddr": "10.0.0.2", 00:26:27.025 "adrfam": "ipv4", 00:26:27.025 "trsvcid": "4420", 00:26:27.025 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:27.025 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:27.025 "hdgst": false, 00:26:27.025 "ddgst": false 00:26:27.025 }, 00:26:27.025 "method": "bdev_nvme_attach_controller" 00:26:27.025 },{ 00:26:27.025 "params": { 00:26:27.025 "name": "Nvme5", 00:26:27.025 "trtype": "tcp", 00:26:27.025 "traddr": "10.0.0.2", 00:26:27.025 "adrfam": "ipv4", 00:26:27.025 "trsvcid": "4420", 00:26:27.025 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:27.025 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:27.025 "hdgst": false, 00:26:27.025 "ddgst": false 00:26:27.025 }, 00:26:27.025 "method": "bdev_nvme_attach_controller" 00:26:27.025 },{ 00:26:27.025 "params": { 00:26:27.025 "name": "Nvme6", 00:26:27.025 "trtype": "tcp", 00:26:27.025 "traddr": "10.0.0.2", 00:26:27.025 "adrfam": "ipv4", 00:26:27.025 "trsvcid": "4420", 00:26:27.025 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:27.025 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:27.025 "hdgst": false, 00:26:27.025 "ddgst": false 00:26:27.025 }, 00:26:27.025 "method": "bdev_nvme_attach_controller" 00:26:27.025 },{ 00:26:27.025 "params": { 00:26:27.025 "name": "Nvme7", 00:26:27.025 "trtype": "tcp", 00:26:27.025 "traddr": "10.0.0.2", 00:26:27.025 "adrfam": "ipv4", 00:26:27.025 "trsvcid": "4420", 00:26:27.025 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:27.025 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:27.025 "hdgst": false, 00:26:27.025 "ddgst": false 00:26:27.025 }, 00:26:27.025 "method": "bdev_nvme_attach_controller" 00:26:27.025 
},{ 00:26:27.025 "params": { 00:26:27.025 "name": "Nvme8", 00:26:27.025 "trtype": "tcp", 00:26:27.025 "traddr": "10.0.0.2", 00:26:27.025 "adrfam": "ipv4", 00:26:27.025 "trsvcid": "4420", 00:26:27.025 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:27.025 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:27.025 "hdgst": false, 00:26:27.025 "ddgst": false 00:26:27.025 }, 00:26:27.025 "method": "bdev_nvme_attach_controller" 00:26:27.025 },{ 00:26:27.025 "params": { 00:26:27.025 "name": "Nvme9", 00:26:27.025 "trtype": "tcp", 00:26:27.025 "traddr": "10.0.0.2", 00:26:27.025 "adrfam": "ipv4", 00:26:27.025 "trsvcid": "4420", 00:26:27.025 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:27.025 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:27.025 "hdgst": false, 00:26:27.025 "ddgst": false 00:26:27.025 }, 00:26:27.025 "method": "bdev_nvme_attach_controller" 00:26:27.025 },{ 00:26:27.025 "params": { 00:26:27.025 "name": "Nvme10", 00:26:27.025 "trtype": "tcp", 00:26:27.025 "traddr": "10.0.0.2", 00:26:27.025 "adrfam": "ipv4", 00:26:27.025 "trsvcid": "4420", 00:26:27.025 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:27.025 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:27.025 "hdgst": false, 00:26:27.025 "ddgst": false 00:26:27.025 }, 00:26:27.025 "method": "bdev_nvme_attach_controller" 00:26:27.025 }' 00:26:27.025 [2024-07-15 09:59:43.633686] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:26:27.026 [2024-07-15 09:59:43.633769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1983700 ] 00:26:27.026 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.026 [2024-07-15 09:59:43.670610] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:27.026 [2024-07-15 09:59:43.700572] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.026 [2024-07-15 09:59:43.787723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.921 Running I/O for 10 seconds... 
00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:26:28.921 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:29.177 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:29.177 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:29.177 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:29.177 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:29.177 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.177 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.177 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.433 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:26:29.433 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:26:29.433 09:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@67 -- # sleep 0.25 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1983700 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1983700 ']' 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1983700 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1983700 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1983700' 00:26:29.689 killing process with pid 1983700 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1983700 00:26:29.689 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1983700 00:26:29.689 Received shutdown signal, test time was about 1.141562 seconds 00:26:29.689 00:26:29.689 Latency(us) 00:26:29.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.690 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.690 Verification LBA range: start 0x0 length 0x400 00:26:29.690 Nvme1n1 : 1.13 226.37 14.15 0.00 0.00 279345.11 21845.33 256318.58 00:26:29.690 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.690 Verification LBA range: start 0x0 length 0x400 00:26:29.690 Nvme2n1 : 1.11 229.96 14.37 0.00 0.00 269223.25 19612.25 251658.24 00:26:29.690 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.690 Verification LBA range: start 0x0 
length 0x400 00:26:29.690 Nvme3n1 : 1.11 241.14 15.07 0.00 0.00 246621.72 8446.86 250104.79 00:26:29.690 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.690 Verification LBA range: start 0x0 length 0x400 00:26:29.690 Nvme4n1 : 1.10 232.00 14.50 0.00 0.00 255270.68 18641.35 259425.47 00:26:29.690 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.690 Verification LBA range: start 0x0 length 0x400 00:26:29.690 Nvme5n1 : 1.12 227.59 14.22 0.00 0.00 254994.20 20097.71 257872.02 00:26:29.690 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.690 Verification LBA range: start 0x0 length 0x400 00:26:29.690 Nvme6n1 : 1.14 224.59 14.04 0.00 0.00 252735.53 23884.23 265639.25 00:26:29.690 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.690 Verification LBA range: start 0x0 length 0x400 00:26:29.690 Nvme7n1 : 1.13 225.72 14.11 0.00 0.00 245946.03 18058.81 254765.13 00:26:29.690 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.690 Verification LBA range: start 0x0 length 0x400 00:26:29.690 Nvme8n1 : 1.12 233.05 14.57 0.00 0.00 231649.02 4441.88 250104.79 00:26:29.690 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.690 Verification LBA range: start 0x0 length 0x400 00:26:29.690 Nvme9n1 : 1.14 224.41 14.03 0.00 0.00 235395.98 15728.64 264085.81 00:26:29.690 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.690 Verification LBA range: start 0x0 length 0x400 00:26:29.690 Nvme10n1 : 1.10 175.24 10.95 0.00 0.00 292993.45 22427.88 288940.94 00:26:29.690 =================================================================================================================== 00:26:29.690 Total : 2240.07 140.00 0.00 0.00 255393.43 4441.88 288940.94 00:26:29.947 09:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:26:30.878 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1983632 00:26:30.878 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:26:30.878 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:30.878 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:30.878 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:30.878 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:30.878 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:30.878 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:26:30.878 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:30.878 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:26:30.878 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:30.878 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:30.878 rmmod nvme_tcp 00:26:31.135 rmmod nvme_fabrics 00:26:31.135 rmmod nvme_keyring 00:26:31.135 09:59:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:31.135 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:26:31.135 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:26:31.135 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1983632 ']' 00:26:31.135 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1983632 00:26:31.135 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1983632 ']' 00:26:31.135 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1983632 00:26:31.135 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:26:31.136 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:31.136 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1983632 00:26:31.136 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:31.136 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:31.136 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1983632' 00:26:31.136 killing process with pid 1983632 00:26:31.136 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1983632 00:26:31.136 09:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1983632 00:26:31.701 09:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:31.701 09:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:31.701 09:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:31.701 09:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:31.701 09:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:31.701 09:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.701 09:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:31.701 09:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:33.603 00:26:33.603 real 0m7.742s 00:26:33.603 user 0m23.629s 00:26:33.603 sys 0m1.517s 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:33.603 ************************************ 00:26:33.603 END TEST nvmf_shutdown_tc2 00:26:33.603 ************************************ 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:33.603 09:59:50 
nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:33.603 ************************************ 00:26:33.603 START TEST nvmf_shutdown_tc3 00:26:33.603 ************************************ 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:26:33.603 09:59:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:33.603 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:33.604 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:33.604 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:33.604 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:33.604 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:33.604 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:33.863 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:33.863 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:33.863 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:33.863 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:33.863 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:33.863 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:33.863 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:33.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:33.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:26:33.863 00:26:33.863 --- 10.0.0.2 ping statistics --- 00:26:33.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.863 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:26:33.863 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:33.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:33.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:26:33.864 00:26:33.864 --- 10.0.0.1 ping statistics --- 00:26:33.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.864 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1984664 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1984664 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1984664 ']' 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:33.864 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:33.864 [2024-07-15 09:59:50.542435] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:26:33.864 [2024-07-15 09:59:50.542516] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.864 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.864 [2024-07-15 09:59:50.580335] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:33.864 [2024-07-15 09:59:50.612491] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:34.122 [2024-07-15 09:59:50.703766] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:34.122 [2024-07-15 09:59:50.703826] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:34.122 [2024-07-15 09:59:50.703843] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:34.122 [2024-07-15 09:59:50.703858] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:34.122 [2024-07-15 09:59:50.703870] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:34.122 [2024-07-15 09:59:50.703981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:34.122 [2024-07-15 09:59:50.704080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:34.122 [2024-07-15 09:59:50.704133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.122 [2024-07-15 09:59:50.704131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:34.122 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:34.122 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:26:34.122 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:34.122 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:34.122 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:34.122 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:34.122 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:34.122 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.122 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:34.122 [2024-07-15 09:59:50.868817] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.122 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.122 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:34.122 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:34.122 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:34.123 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:34.123 09:59:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:34.123 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:34.123 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:34.123 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:34.123 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:34.123 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:34.123 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:34.123 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:34.123 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:34.123 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:34.123 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:34.123 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:34.123 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:34.123 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:34.123 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:34.123 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:34.123 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:34.123 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:34.381 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:34.381 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:34.381 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:34.381 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:34.381 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.381 09:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:34.381 Malloc1 00:26:34.381 [2024-07-15 09:59:50.957267] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:34.381 Malloc2 00:26:34.381 Malloc3 00:26:34.381 Malloc4 00:26:34.381 Malloc5 00:26:34.639 Malloc6 00:26:34.639 Malloc7 00:26:34.639 Malloc8 00:26:34.639 Malloc9 00:26:34.639 Malloc10 00:26:34.639 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.639 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:34.639 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:34.639 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:34.639 09:59:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1984785 00:26:34.639 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1984785 /var/tmp/bdevperf.sock 00:26:34.639 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1984785 ']' 00:26:34.639 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:34.639 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:34.639 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:34.639 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:34.639 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:26:34.639 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:34.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:34.639 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:26:34.639 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:34.639 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:34.639 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:34.639 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:34.639 { 00:26:34.639 "params": { 00:26:34.639 "name": "Nvme$subsystem", 00:26:34.639 "trtype": "$TEST_TRANSPORT", 00:26:34.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.639 "adrfam": "ipv4", 00:26:34.639 "trsvcid": "$NVMF_PORT", 00:26:34.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.639 "hdgst": ${hdgst:-false}, 00:26:34.639 "ddgst": ${ddgst:-false} 00:26:34.639 }, 00:26:34.639 "method": "bdev_nvme_attach_controller" 00:26:34.639 } 00:26:34.639 EOF 00:26:34.639 )") 00:26:34.639 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:34.639 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:34.640 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:34.640 { 00:26:34.640 "params": { 00:26:34.640 "name": "Nvme$subsystem", 00:26:34.640 "trtype": "$TEST_TRANSPORT", 00:26:34.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.640 "adrfam": "ipv4", 00:26:34.640 "trsvcid": "$NVMF_PORT", 00:26:34.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.640 "hdgst": ${hdgst:-false}, 00:26:34.640 "ddgst": ${ddgst:-false} 00:26:34.640 }, 00:26:34.640 "method": "bdev_nvme_attach_controller" 00:26:34.640 } 00:26:34.640 EOF 00:26:34.640 )") 00:26:34.640 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:34.640 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:34.640 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:34.640 { 00:26:34.640 "params": { 00:26:34.640 "name": "Nvme$subsystem", 00:26:34.640 "trtype": "$TEST_TRANSPORT", 00:26:34.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.640 "adrfam": "ipv4", 00:26:34.640 "trsvcid": "$NVMF_PORT", 00:26:34.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.640 "hdgst": ${hdgst:-false}, 00:26:34.640 "ddgst": ${ddgst:-false} 00:26:34.640 }, 00:26:34.640 "method": "bdev_nvme_attach_controller" 00:26:34.640 } 00:26:34.640 EOF 00:26:34.640 )") 00:26:34.640 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:34.640 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:34.640 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:34.640 { 00:26:34.640 "params": { 00:26:34.640 "name": "Nvme$subsystem", 00:26:34.640 "trtype": "$TEST_TRANSPORT", 00:26:34.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.640 "adrfam": "ipv4", 00:26:34.640 "trsvcid": "$NVMF_PORT", 00:26:34.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.640 "hdgst": ${hdgst:-false}, 00:26:34.640 "ddgst": ${ddgst:-false} 00:26:34.640 }, 00:26:34.640 "method": "bdev_nvme_attach_controller" 00:26:34.640 } 00:26:34.640 EOF 00:26:34.640 )") 00:26:34.640 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:34.640 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:34.640 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:34.640 { 00:26:34.640 "params": { 00:26:34.640 "name": "Nvme$subsystem", 00:26:34.640 "trtype": "$TEST_TRANSPORT", 00:26:34.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.640 "adrfam": "ipv4", 00:26:34.640 "trsvcid": "$NVMF_PORT", 00:26:34.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.640 "hdgst": ${hdgst:-false}, 00:26:34.640 "ddgst": ${ddgst:-false} 00:26:34.640 }, 00:26:34.640 "method": "bdev_nvme_attach_controller" 00:26:34.640 } 00:26:34.640 EOF 00:26:34.640 )") 00:26:34.640 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:34.640 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:34.640 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:34.640 { 00:26:34.640 "params": { 00:26:34.640 "name": "Nvme$subsystem", 00:26:34.640 "trtype": "$TEST_TRANSPORT", 00:26:34.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.640 "adrfam": "ipv4", 00:26:34.640 "trsvcid": "$NVMF_PORT", 00:26:34.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.640 "hdgst": ${hdgst:-false}, 00:26:34.640 "ddgst": ${ddgst:-false} 00:26:34.640 }, 00:26:34.640 "method": "bdev_nvme_attach_controller" 00:26:34.640 } 00:26:34.640 EOF 00:26:34.640 )") 00:26:34.898 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:34.898 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:26:34.898 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:34.898 { 00:26:34.898 "params": { 00:26:34.898 "name": "Nvme$subsystem", 00:26:34.898 "trtype": "$TEST_TRANSPORT", 00:26:34.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.898 "adrfam": "ipv4", 00:26:34.898 "trsvcid": "$NVMF_PORT", 00:26:34.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.898 "hdgst": ${hdgst:-false}, 00:26:34.898 "ddgst": ${ddgst:-false} 00:26:34.898 }, 00:26:34.898 "method": "bdev_nvme_attach_controller" 00:26:34.898 } 00:26:34.898 EOF 00:26:34.898 )") 00:26:34.898 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:34.898 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:34.898 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:34.898 { 00:26:34.898 "params": { 00:26:34.898 "name": "Nvme$subsystem", 00:26:34.898 "trtype": "$TEST_TRANSPORT", 00:26:34.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.898 "adrfam": "ipv4", 00:26:34.898 "trsvcid": "$NVMF_PORT", 00:26:34.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.898 "hdgst": ${hdgst:-false}, 00:26:34.898 "ddgst": ${ddgst:-false} 00:26:34.898 }, 00:26:34.898 "method": "bdev_nvme_attach_controller" 00:26:34.898 } 00:26:34.898 EOF 00:26:34.898 )") 00:26:34.898 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:34.898 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:34.898 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:34.898 { 00:26:34.898 "params": { 00:26:34.898 "name": "Nvme$subsystem", 00:26:34.898 "trtype": "$TEST_TRANSPORT", 00:26:34.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.898 "adrfam": "ipv4", 00:26:34.898 "trsvcid": "$NVMF_PORT", 00:26:34.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.898 "hdgst": ${hdgst:-false}, 00:26:34.898 "ddgst": ${ddgst:-false} 00:26:34.898 }, 00:26:34.898 "method": "bdev_nvme_attach_controller" 00:26:34.898 } 00:26:34.898 EOF 00:26:34.898 )") 00:26:34.898 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:34.898 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:34.898 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:34.898 { 00:26:34.898 "params": { 00:26:34.898 "name": "Nvme$subsystem", 00:26:34.898 "trtype": "$TEST_TRANSPORT", 00:26:34.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.898 "adrfam": "ipv4", 00:26:34.898 "trsvcid": "$NVMF_PORT", 00:26:34.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.898 "hdgst": ${hdgst:-false}, 00:26:34.898 "ddgst": ${ddgst:-false} 00:26:34.898 }, 00:26:34.898 "method": "bdev_nvme_attach_controller" 00:26:34.898 } 00:26:34.898 EOF 00:26:34.898 )") 00:26:34.898 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:34.898 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:26:34.898 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:26:34.898 09:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:34.898 "params": { 00:26:34.898 "name": "Nvme1", 00:26:34.898 "trtype": "tcp", 00:26:34.898 "traddr": "10.0.0.2", 00:26:34.898 "adrfam": "ipv4", 00:26:34.898 "trsvcid": "4420", 00:26:34.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:34.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:34.898 "hdgst": false, 00:26:34.898 "ddgst": false 00:26:34.898 }, 00:26:34.898 "method": "bdev_nvme_attach_controller" 00:26:34.898 },{ 00:26:34.898 "params": { 00:26:34.898 "name": "Nvme2", 00:26:34.898 "trtype": "tcp", 00:26:34.898 "traddr": "10.0.0.2", 00:26:34.898 "adrfam": "ipv4", 00:26:34.898 "trsvcid": "4420", 00:26:34.898 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:34.898 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:34.898 "hdgst": false, 00:26:34.898 "ddgst": false 00:26:34.898 }, 00:26:34.898 "method": "bdev_nvme_attach_controller" 00:26:34.898 },{ 00:26:34.898 "params": { 00:26:34.898 "name": "Nvme3", 00:26:34.898 "trtype": "tcp", 00:26:34.898 "traddr": "10.0.0.2", 00:26:34.898 "adrfam": "ipv4", 00:26:34.898 "trsvcid": "4420", 00:26:34.898 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:34.898 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:34.898 "hdgst": false, 00:26:34.898 "ddgst": false 00:26:34.898 }, 00:26:34.898 "method": "bdev_nvme_attach_controller" 00:26:34.898 },{ 00:26:34.898 "params": { 00:26:34.898 "name": "Nvme4", 00:26:34.898 "trtype": "tcp", 00:26:34.898 "traddr": "10.0.0.2", 00:26:34.898 "adrfam": "ipv4", 00:26:34.898 "trsvcid": "4420", 00:26:34.898 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:34.898 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:34.898 "hdgst": false, 00:26:34.898 "ddgst": false 00:26:34.898 }, 00:26:34.898 "method": "bdev_nvme_attach_controller" 00:26:34.898 },{ 00:26:34.898 "params": { 00:26:34.898 "name": "Nvme5", 00:26:34.898 "trtype": "tcp", 00:26:34.898 "traddr": "10.0.0.2", 00:26:34.898 "adrfam": "ipv4", 00:26:34.898 "trsvcid": "4420", 00:26:34.898 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:34.898 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:34.898 "hdgst": false, 00:26:34.898 "ddgst": false 00:26:34.898 }, 00:26:34.898 "method": "bdev_nvme_attach_controller" 00:26:34.898 },{ 00:26:34.898 "params": { 00:26:34.898 "name": "Nvme6", 00:26:34.898 "trtype": "tcp", 00:26:34.898 "traddr": "10.0.0.2", 00:26:34.898 "adrfam": "ipv4", 00:26:34.898 "trsvcid": "4420", 00:26:34.898 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:34.898 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:34.898 "hdgst": false, 00:26:34.898 "ddgst": false 00:26:34.898 }, 00:26:34.898 "method": "bdev_nvme_attach_controller" 00:26:34.898 },{ 00:26:34.898 "params": { 00:26:34.898 "name": "Nvme7", 00:26:34.898 "trtype": "tcp", 00:26:34.898 "traddr": "10.0.0.2", 00:26:34.898 "adrfam": "ipv4", 00:26:34.898 "trsvcid": "4420", 00:26:34.898 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:34.898 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:34.898 "hdgst": false, 00:26:34.898 "ddgst": false 00:26:34.898 }, 00:26:34.898 "method": "bdev_nvme_attach_controller" 00:26:34.898 },{ 00:26:34.898 "params": { 00:26:34.898 "name": "Nvme8", 00:26:34.898 "trtype": "tcp", 00:26:34.898 "traddr": "10.0.0.2", 00:26:34.898 "adrfam": "ipv4", 00:26:34.898 "trsvcid": "4420", 00:26:34.898 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:34.898 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:34.898 "hdgst": false, 
00:26:34.898 "ddgst": false 00:26:34.898 }, 00:26:34.898 "method": "bdev_nvme_attach_controller" 00:26:34.898 },{ 00:26:34.898 "params": { 00:26:34.898 "name": "Nvme9", 00:26:34.898 "trtype": "tcp", 00:26:34.898 "traddr": "10.0.0.2", 00:26:34.898 "adrfam": "ipv4", 00:26:34.898 "trsvcid": "4420", 00:26:34.898 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:34.898 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:34.898 "hdgst": false, 00:26:34.898 "ddgst": false 00:26:34.898 }, 00:26:34.898 "method": "bdev_nvme_attach_controller" 00:26:34.898 },{ 00:26:34.898 "params": { 00:26:34.898 "name": "Nvme10", 00:26:34.898 "trtype": "tcp", 00:26:34.898 "traddr": "10.0.0.2", 00:26:34.898 "adrfam": "ipv4", 00:26:34.898 "trsvcid": "4420", 00:26:34.898 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:34.898 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:34.898 "hdgst": false, 00:26:34.898 "ddgst": false 00:26:34.898 }, 00:26:34.898 "method": "bdev_nvme_attach_controller" 00:26:34.898 }' 00:26:34.898 [2024-07-15 09:59:51.449931] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:26:34.898 [2024-07-15 09:59:51.450013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1984785 ] 00:26:34.898 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.898 [2024-07-15 09:59:51.485691] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:34.898 [2024-07-15 09:59:51.514942] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.898 [2024-07-15 09:59:51.602539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.794 Running I/O for 10 seconds... 
00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:26:36.794 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # read_io_count=131 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1984664 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1984664 ']' 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1984664 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1984664 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1984664' 00:26:37.052 killing process with pid 1984664 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1984664 00:26:37.052 09:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1984664 00:26:37.052 [2024-07-15 09:59:53.832083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0aa0 is same with the state(5) to be set 00:26:37.052 [2024-07-15 09:59:53.832207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0aa0 is same with the state(5) to be set 00:26:37.052 [2024-07-15 09:59:53.832226] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0aa0 is same with the state(5) to be set 00:26:37.052 [2024-07-15 09:59:53.832240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0aa0 is same with the state(5) to be set 00:26:37.052 [2024-07-15 09:59:53.832253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0aa0 is same with the state(5) to be set 00:26:37.052 [2024-07-15 09:59:53.832266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0aa0 is same with the state(5) to be set 00:26:37.052 [2024-07-15 09:59:53.832280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0aa0 is same with the state(5) to be set 00:26:37.052 [2024-07-15 09:59:53.832295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0aa0 is same with the state(5) to be set 00:26:37.052 [2024-07-15 09:59:53.832308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0aa0 is same with the state(5) to be set 00:26:37.052 [2024-07-15 09:59:53.832321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c0aa0 is same with the state(5) to be set 00:26:37.052 [2024-07-15 09:59:53.832333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x8c0aa0 is same with the state(5) to be set 00:26:37.053
[the same tcp.c:1607:nvmf_tcp_qpair_set_recv_state error repeats in bursts between 09:59:53.832 and 09:59:53.843, several dozen lines per qpair, for tqpair=0x8c0aa0, 0x8c0f40, 0x8c13e0, 0x8c18a0, 0x8c1d40 and 0x8c2200, as the target shutdown tears the connections down]
same with the state(5) to be set 00:26:37.329 [2024-07-15 09:59:53.842990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c2200 is same with the state(5) to be set 00:26:37.329 [2024-07-15 09:59:53.843002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c2200 is same with the state(5) to be set 00:26:37.329 [2024-07-15 09:59:53.843014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c2200 is same with the state(5) to be set 00:26:37.329 [2024-07-15 09:59:53.843000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.329 [2024-07-15 09:59:53.843026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c2200 is same with the state(5) to be set 00:26:37.329 [2024-07-15 09:59:53.843039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c2200 is same with the state(5) to be set 00:26:37.330 [2024-07-15 09:59:53.843041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.330 [2024-07-15 09:59:53.843051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c2200 is same with the state(5) to be set 00:26:37.330 [2024-07-15 09:59:53.843061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.330 [2024-07-15 09:59:53.843064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c2200 is same with the state(5) to be set 00:26:37.330 [2024-07-15 09:59:53.843075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-15 09:59:53.843077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c2200 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.330 he state(5) to be set 00:26:37.330 [2024-07-15 09:59:53.843091] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c2200 is same with t[2024-07-15 09:59:53.843091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nshe state(5) to be set 00:26:37.330 id:0 cdw10:00000000 cdw11:00000000 00:26:37.330 [2024-07-15 09:59:53.843106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c2200 is same with t[2024-07-15 09:59:53.843107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(5) to be set 00:26:37.330 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.330 [2024-07-15 09:59:53.843124] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c2200 is same with the state(5) to be set 00:26:37.330 [2024-07-15 09:59:53.843126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.330 [2024-07-15 09:59:53.843138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c2200 is same with the state(5) to be set 00:26:37.330 [2024-07-15 09:59:53.843140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.330 [2024-07-15 09:59:53.843152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c2200 is same with the state(5) to be set 00:26:37.330 
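The repeated *ERROR* condensed above is the tcp transport noticing a redundant state transition: the qpair's recv state is already the value being set, and a caller keeps retrying the same transition, emitting one line per attempt. A minimal illustrative guard that would produce this kind of flood (all names and the layout are invented for this sketch; this is not the SPDK source):

#include <stdio.h>

/* Hypothetical sketch of a recv-state setter with a redundant-transition
 * guard. Invented for illustration; not SPDK's actual code. */
enum recv_state {
	RECV_STATE_AWAIT_PDU_READY = 0,
	RECV_STATE_QUIESCING = 5    /* the "state(5)" printed in the log */
};

struct tqpair {
	enum recv_state recv_state;
};

static void set_recv_state(struct tqpair *tqpair, enum recv_state state)
{
	if (tqpair->recv_state == state) {
		/* Redundant transition: warn and return without side effects.
		 * A caller retrying in a loop emits one line per attempt,
		 * which is exactly the repeated message condensed above. */
		fprintf(stderr,
			"The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
}

int main(void)
{
	struct tqpair q = { .recv_state = RECV_STATE_QUIESCING };
	for (int i = 0; i < 3; i++) {
		set_recv_state(&q, RECV_STATE_QUIESCING);  /* prints 3 warnings */
	}
	return 0;
}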
00:26:37.330 [2024-07-15 09:59:53.843154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116ef10 is same with the state(5) to be set
00:26:37.330 [2024-07-15 09:59:53.843219 .. 09:59:53.844099] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: for each of tqpair=0x133aa90, 0x11b1010, 0x1339e80, 0x11ab740 and 0x1342140, four admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000) complete as ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, each block followed by nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=<addr> is same with the state(5) to be set
00:26:37.330 [2024-07-15 09:59:53.845223 .. 09:59:53.847126] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 64 outstanding I/O commands on sqid:1 (WRITE cid:22-63 lba:19200-24448, READ cid:0-21 lba:16384-19072, each len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) all complete as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.331 [2024-07-15 09:59:53.847165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:37.331 [2024-07-15 09:59:53.847250] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12aa7d0 was disconnected and freed. reset controller.
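The disconnect sequence above, and the two that follow, have the same shape: once the transport reports a CQ error, every command still outstanding on the submission queue is completed with ABORTED - SQ DELETION (sct 00 / sc 08) before the qpair is freed and the controller reset. A minimal sketch of such a drain loop, with invented names rather than SPDK's API:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical sketch of draining a deleted submission queue: every
 * tracked command completes with ABORTED - SQ DELETION (sct=0x0, sc=0x8),
 * matching the "(00/08)" pairs in the log. Names are invented for
 * illustration and are not SPDK's API. */
#define SC_ABORTED_SQ_DELETION 0x08

struct cmd {
	int cid;
	struct cmd *next;
};

struct qpair {
	int id;
	struct cmd *outstanding;   /* singly linked list of queued commands */
};

static void abort_outstanding(struct qpair *q)
{
	struct cmd *c = q->outstanding;
	while (c != NULL) {
		struct cmd *next = c->next;
		/* Complete the command with the SQ-deletion abort status. */
		printf("ABORTED - SQ DELETION (00/%02x) qid:%d cid:%d\n",
		       SC_ABORTED_SQ_DELETION, q->id, c->cid);
		free(c);
		c = next;
	}
	q->outstanding = NULL;   /* qpair can now be freed and reset */
}

int main(void)
{
	struct qpair q = { .id = 1, .outstanding = NULL };
	for (int cid = 2; cid >= 0; cid--) {   /* queue cid 0..2 */
		struct cmd *c = malloc(sizeof(*c));
		c->cid = cid;
		c->next = q.outstanding;
		q.outstanding = c;
	}
	abort_outstanding(&q);   /* prints one abort line per queued cid */
	return 0;
}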
00:26:37.331 [2024-07-15 09:59:53.847723 .. 09:59:53.849619] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 64 outstanding WRITE commands on sqid:1 (cid:0-63, lba:16384-24448, each len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) all complete as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.332 [2024-07-15 09:59:53.849649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:37.332 [2024-07-15 09:59:53.849718] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12abc80 was disconnected and freed. reset controller.
00:26:37.332 [2024-07-15 09:59:53.850274 .. 09:59:53.850419] nvme_qpair.c: a further run of outstanding WRITE commands on sqid:1 begins completing the same way (cid:0-4, lba:16384-16896, len:128, ABORTED - SQ DELETION (00/08)), continuing with [2024-07-15 09:59:53.850434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.850447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.850462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.850480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.850496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.850510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.850525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.850538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.850554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.850567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.850587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.850600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.850615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.850629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.850644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.850657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.850672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.850686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.850701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.850714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.850729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.850742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.850756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.850770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.850784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.850797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.850812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.850825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.850840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.850857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.850873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.850895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.850913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.850928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.850944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.850957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.850972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.850986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.851969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.851985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.852001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.852017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.852031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.852046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.852060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.852075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.852088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.852104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.852117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.852132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.852145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.852161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.852184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.852199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.333 [2024-07-15 09:59:53.852213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.333 [2024-07-15 09:59:53.852242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:37.333 [2024-07-15 09:59:53.852311] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12acf30 was disconnected and freed. reset controller. 
00:26:37.333 [2024-07-15 09:59:53.852368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.334 [2024-07-15 09:59:53.852389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 similar WRITE / ABORTED - SQ DELETION (00/08) record pairs omitted: cid:1 through cid:63, lba 16512 through 24448 in steps of 128 ...]
00:26:37.335 [2024-07-15 09:59:53.854324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ae3f0 is same with the state(5) to be set
00:26:37.335 [2024-07-15 09:59:53.854830] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12ae3f0 was disconnected and freed. reset controller.
00:26:37.335 [2024-07-15 09:59:53.854951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116ef10 (9): Bad file descriptor
00:26:37.335 [2024-07-15 09:59:53.854988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x133aa90 (9): Bad file descriptor
00:26:37.335 [2024-07-15 09:59:53.855017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b1010 (9): Bad file descriptor
00:26:37.335 [2024-07-15 09:59:53.855085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:37.335 [2024-07-15 09:59:53.855110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 3 similar ASYNC EVENT REQUEST / ABORTED - SQ DELETION (00/08) record pairs omitted: cid:1 through cid:3 ...]
00:26:37.335 [2024-07-15 09:59:53.855213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64610 is same with the state(5) to be set
00:26:37.335 [2024-07-15 09:59:53.855242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339e80 (9): Bad file descriptor
00:26:37.335 [2024-07-15 09:59:53.855287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:37.335 [2024-07-15 09:59:53.855306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 3 similar ASYNC EVENT REQUEST / ABORTED - SQ DELETION (00/08) record pairs omitted: cid:1 through cid:3 ...]
00:26:37.335 [2024-07-15 09:59:53.855400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1338600 is same with the state(5) to be set
00:26:37.335 [2024-07-15 09:59:53.855438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:37.335 [2024-07-15 09:59:53.855457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 3 similar ASYNC EVENT REQUEST / ABORTED - SQ DELETION (00/08) record pairs omitted: cid:1 through cid:3 ...]
00:26:37.335 [2024-07-15 09:59:53.855555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b7b40 is same with the state(5) to be set
00:26:37.335 [2024-07-15 09:59:53.855593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:37.335 [2024-07-15 09:59:53.855613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 3 similar ASYNC EVENT REQUEST / ABORTED - SQ DELETION (00/08) record pairs omitted: cid:1 through cid:3 ...]
00:26:37.335 [2024-07-15 09:59:53.855714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339320 is same with the state(5) to be set
00:26:37.335 [2024-07-15 09:59:53.855736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ab740 (9): Bad file descriptor
00:26:37.335 [2024-07-15 09:59:53.855765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1342140 (9): Bad file descriptor
00:26:37.335 [2024-07-15 09:59:53.856140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.335 [2024-07-15 09:59:53.856175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 23 similar READ / ABORTED - SQ DELETION (00/08) record pairs omitted: cid:1 through cid:23, lba 16512 through 19328 in steps of 128 ...]
00:26:37.335 [2024-07-15 09:59:53.856900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.335 [2024-07-15 09:59:53.856915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.335
[2024-07-15 09:59:53.856930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.335 [2024-07-15 09:59:53.856944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.335 [2024-07-15 09:59:53.856959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.335 [2024-07-15 09:59:53.856972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.335 [2024-07-15 09:59:53.856987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.335 [2024-07-15 09:59:53.857001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 
09:59:53.857224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857515] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.857982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.857997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.858016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.858031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.858045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.858060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.858073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.858173] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x116aab0 was disconnected and freed. reset controller. 
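Note on the burst of completions above: SPDK prints each in-flight command followed by its completion, and the "(00/08)" pair is the NVMe status-code-type/status-code — SCT 0x0 (generic command status) with SC 0x08, "Command Aborted due to SQ Deletion", which is expected while submission queues are torn down for the controller reset that follows. The trailing p/m/dnr fields are the phase, more, and do-not-retry bits of the same 16-bit completion status word. A minimal standalone C sketch (not SPDK code; bit layout per the NVMe base specification) of how that word unpacks:

    /* Hypothetical illustration, not part of the test output. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* SCT=0x0 (generic), SC=0x08 (aborted, SQ deletion), as in the log. */
        uint16_t status = (0x0 << 9) | (0x08 << 1);

        unsigned p   = status & 0x1;          /* bit 0: phase tag            */
        unsigned sc  = (status >> 1) & 0xff;  /* bits 8:1: status code       */
        unsigned sct = (status >> 9) & 0x7;   /* bits 11:9: status code type */
        unsigned m   = (status >> 14) & 0x1;  /* bit 14: more                */
        unsigned dnr = (status >> 15) & 0x1;  /* bit 15: do not retry        */

        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
        return 0;
    }

Compiled and run, this prints "(00/08) p:0 m:0 dnr:0", matching the fields in the records above.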
00:26:37.336 [2024-07-15 09:59:53.864227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:26:37.336 [2024-07-15 09:59:53.864281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:26:37.336 [2024-07-15 09:59:53.864303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:26:37.336 [2024-07-15 09:59:53.864332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1338600 (9): Bad file descriptor 00:26:37.336 [2024-07-15 09:59:53.864356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339320 (9): Bad file descriptor 00:26:37.336 [2024-07-15 09:59:53.864375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b7b40 (9): Bad file descriptor 00:26:37.336 [2024-07-15 09:59:53.865754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:26:37.336 [2024-07-15 09:59:53.865785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:26:37.336 [2024-07-15 09:59:53.865867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc64610 (9): Bad file descriptor 00:26:37.336 [2024-07-15 09:59:53.865988] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:37.336 [2024-07-15 09:59:53.866062] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:37.336 [2024-07-15 09:59:53.866130] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:37.336 [2024-07-15 09:59:53.866486] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:37.336 [2024-07-15 09:59:53.866732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.336 [2024-07-15 09:59:53.866768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b7b40 with addr=10.0.0.2, port=4420 00:26:37.336 [2024-07-15 09:59:53.866786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b7b40 is same with the state(5) to be set 00:26:37.336 [2024-07-15 09:59:53.866988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.336 [2024-07-15 09:59:53.867014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339320 with addr=10.0.0.2, port=4420 00:26:37.336 [2024-07-15 09:59:53.867030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339320 is same with the state(5) to be set 00:26:37.336 [2024-07-15 09:59:53.867143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.336 [2024-07-15 09:59:53.867178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1338600 with addr=10.0.0.2, port=4420 00:26:37.336 [2024-07-15 09:59:53.867192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1338600 is same with the state(5) to be set 00:26:37.336 [2024-07-15 09:59:53.867311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.336 [2024-07-15 09:59:53.867337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b1010 with addr=10.0.0.2, port=4420 00:26:37.336 [2024-07-15 09:59:53.867353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x11b1010 is same with the state(5) to be set 00:26:37.336 [2024-07-15 09:59:53.867467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.336 [2024-07-15 09:59:53.867493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339e80 with addr=10.0.0.2, port=4420 00:26:37.336 [2024-07-15 09:59:53.867508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339e80 is same with the state(5) to be set 00:26:37.336 [2024-07-15 09:59:53.867579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.867602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.867628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.867644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.867660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.867674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.867689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.867702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.867717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.867731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.867746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.867759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.336 [2024-07-15 09:59:53.867780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-15 09:59:53.867795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.867810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.867823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.867839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 
09:59:53.867852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.867867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.867887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.867904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.867918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.867933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.867946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.867961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.867975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.867989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868148] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.868983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.868998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.869012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.869027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.869041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.869056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.869069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.869085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.869099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.869114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.869128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.869144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.869157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.869181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.869194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.337 [2024-07-15 09:59:53.869210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.337 [2024-07-15 09:59:53.869223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.869238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.869252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.869274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.869289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.869304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.869317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.869332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.869346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.869361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.869374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.869389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.869402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.869417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.869431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.869447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.869460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.869476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.869489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.869503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1228940 is same with the state(5) to be set 00:26:37.338 [2024-07-15 09:59:53.870780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.870803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.870823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.870838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.870854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.870868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.870892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.870907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.870938] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.870953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.870968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.870981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.870996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.871010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.871025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.871038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.871054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.871067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.871082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.871096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.871111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.871124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.871139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.871152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.871176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.871189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.871204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.338 [2024-07-15 09:59:53.871217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.338 [2024-07-15 09:59:53.871232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.871978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.871993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.872006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.872022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.872040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.872056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.872070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.872085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.872098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.872113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.872126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.872141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.872154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.872169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.872183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.872198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.872211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.872226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.872239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.872255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.872268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.872283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.872296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.872311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.872325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.338 [2024-07-15 09:59:53.872341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.338 [2024-07-15 09:59:53.872355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.872370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.872384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.872403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.872417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.872432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.872446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.872460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.872474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.872489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.872503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.872517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.872530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.872546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.872559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.872574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.872587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.872612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.872626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.872641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.872654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.872669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.872683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.872696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a7350 is same with the state(5) to be set
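
The run of paired NOTICE lines above is SPDK's teardown-path reporting: when a TCP qpair is destroyed during a controller reset, every outstanding I/O on that submission queue completes with the generic status ABORTED - SQ DELETION (printed as (00/08), i.e. status code type 0x0, status code 0x08), and nvme_qpair.c emits one line for the command and one for its completion. A quick way to sanity-check such a run is to tally the pairs. The sketch below is a hypothetical helper, not part of SPDK, and it assumes one log entry per line as reflowed here.

/* abort_tally.c - hypothetical helper, not part of SPDK: tallies the
 * command/completion pairs that nvme_qpair.c prints when a submission
 * queue is deleted during a controller reset, as in the dump above.
 *
 * Build: cc -o abort_tally abort_tally.c
 * Usage: ./abort_tally < autotest.log
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[4096];
    unsigned long reads = 0, writes = 0, aborted = 0;

    while (fgets(line, sizeof(line), stdin) != NULL) {
        /* One line per printed command... */
        if (strstr(line, "nvme_io_qpair_print_command") != NULL) {
            if (strstr(line, "READ sqid") != NULL)
                reads++;
            else if (strstr(line, "WRITE sqid") != NULL)
                writes++;
        }
        /* ...and one line per completion carrying the abort status. */
        if (strstr(line, "ABORTED - SQ DELETION") != NULL)
            aborted++;
    }

    printf("aborted completions: %lu (reads printed: %lu, writes printed: %lu)\n",
           aborted, reads, writes);
    return 0;
}

In a clean dump the aborted count matches reads plus writes, since each printed command is followed by exactly one aborted completion.
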
00:26:37.339 [2024-07-15 09:59:53.873961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.873984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.874978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.874993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.875006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.875021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.875035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.875050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.875063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.875078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.875092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.875107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.875121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.875136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.875150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.875179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.875193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.875208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.875222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.875237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.875250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.875265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.875279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.875294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.875307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.875323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.875336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.875351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.875364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.339 [2024-07-15 09:59:53.875379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.339 [2024-07-15 09:59:53.875392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.875407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.875420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.875435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.875448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.875463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.875476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.875491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.875504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.875519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.875537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.875552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.875566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.875581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.875595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.875610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.875623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.875639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.875653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.875668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.875682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.875696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.875710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.875725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.875739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.875754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.875767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.875782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.875796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.875811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.875824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.875839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.875852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.875866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a87c0 is same with the state(5) to be set
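
Within each dump the aborted READs advance the LBA in lockstep with the transfer length (lba:8192, 8320, 8448, and so on, always +128 with len:128), which says the workload was sequential 128-block I/O when the reset hit. A property like that can be checked mechanically; the sketch below is hypothetical, not SPDK code, and again assumes one log entry per line.

/* stride_check.c - hypothetical check, not part of SPDK: verifies that
 * successive aborted READ commands advance lba by exactly len, i.e.
 * that the aborted stream was sequential I/O. A "stride break" marks
 * the start of a new dump or a genuinely non-sequential command.
 *
 * Build: cc -o stride_check stride_check.c
 * Usage: ./stride_check < autotest.log
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[4096];
    unsigned long lba, prev_lba = 0, breaks = 0, n = 0;
    unsigned int len, prev_len = 0;

    while (fgets(line, sizeof(line), stdin) != NULL) {
        const char *p = strstr(line, "READ sqid:1");
        if (p == NULL)
            continue;
        /* Pull the lba/len fields out of the printed command. */
        p = strstr(p, "lba:");
        if (p == NULL || sscanf(p, "lba:%lu len:%u", &lba, &len) != 2)
            continue;
        if (n > 0 && lba != prev_lba + prev_len)
            breaks++;
        prev_lba = lba;
        prev_len = len;
        n++;
    }

    printf("%lu READ commands, %lu stride breaks\n", n, breaks);
    return 0;
}
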
00:26:37.340 [2024-07-15 09:59:53.877121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.877985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.877998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.878013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.878026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.878041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.878055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.878070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.878083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.878098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.878111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.878126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.878139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.878154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.878168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.878183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.878205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.878220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.878233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.878248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.878262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.878280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.878294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.878310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.878323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.340 [2024-07-15 09:59:53.878338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.340 [2024-07-15 09:59:53.878352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.341 [2024-07-15 09:59:53.878367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.341 [2024-07-15 09:59:53.878380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.341 [2024-07-15 09:59:53.878395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.341 [2024-07-15 09:59:53.878409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.341 [2024-07-15 09:59:53.878424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.341 [2024-07-15 09:59:53.878437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.341 [2024-07-15 09:59:53.878452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.341 [2024-07-15 09:59:53.878465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.341 [2024-07-15 09:59:53.878481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.341 [2024-07-15 09:59:53.878494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.341 [2024-07-15 09:59:53.878509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.341 [2024-07-15 09:59:53.878522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.341 [2024-07-15 09:59:53.878537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.341 [2024-07-15 09:59:53.878550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.341 [2024-07-15 09:59:53.878565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.341 [2024-07-15 09:59:53.878578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.341 [2024-07-15 09:59:53.878593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.341 [2024-07-15 09:59:53.878606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.341 [2024-07-15 09:59:53.878621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.341 [2024-07-15 09:59:53.878641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.341 [2024-07-15 09:59:53.878657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.341 [2024-07-15 09:59:53.878670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.341 [2024-07-15 09:59:53.878686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.341 [2024-07-15 09:59:53.878700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.341 [2024-07-15 09:59:53.878715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.341 [2024-07-15 09:59:53.878728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.341 [2024-07-15 09:59:53.878743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.341 [2024-07-15 09:59:53.878757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:37.341 [2024-07-15 09:59:53.878772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.341 [2024-07-15 09:59:53.878785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-15 09:59:53.878815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeat for WRITE cid:2-4 (lba 24832-25088) and READ cid:60-63 (lba 24064-24448), every command completed ABORTED - SQ DELETION (00/08) ...]
[2024-07-15 09:59:53.880370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... same resetting-controller notice for cnode2, cnode3 and cnode4 ...]
[2024-07-15 09:59:53.880504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b7b40 (9): Bad file descriptor
[... same flush error for tqpair 0x1339320, 0x1338600, 0x11b1010 and 0x1339e80 ...]
[2024-07-15 09:59:53.880645] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (repeated 5x)
[2024-07-15 09:59:53.881016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-15 09:59:53.881046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116ef10 with addr=10.0.0.2, port=4420
[2024-07-15 09:59:53.881063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116ef10 is same with the state(5) to be set
[... same connect()/recv-state error triple for tqpair 0x133aa90, 0x1342140 and 0x11ab740 ...]
[2024-07-15 09:59:53.881547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
[2024-07-15 09:59:53.881565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
[2024-07-15 09:59:53.881580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
[... same reinitialization-failed sequence for cnode8, cnode9, cnode5 and cnode10 ...]
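The cascade just above is SPDK's reconnect machinery giving up on cnode5 and cnode7 through cnode10: each controller is flagged as being in an error state, the reconnect poll fails, and nvme_ctrlr_fail marks it failed. The same state is visible from outside the process over the JSON-RPC socket. A minimal sketch, assuming the target app is still listening on the default /var/tmp/spdk.sock; the controller name passed to the detach call is purely illustrative:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# List the attached NVMe bdev controllers and their names.
"$rpc" bdev_nvme_get_controllers | jq -r '.[].name'

# A controller stuck in failed state keeps failing reconnect polls;
# detaching it clears the bdev layer's retry loop (Nvme7 is hypothetical here).
"$rpc" bdev_nvme_detach_controller Nvme7
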
[2024-07-15 09:59:53.882894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-15 09:59:53.882919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/completion pairs repeat for cid:1-63 (lba 16512 through 24448, stepping 128 blocks per command), every command completed ABORTED - SQ DELETION (00/08) ...]
[2024-07-15 09:59:53.884814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a9380 is same with the state(5) to be set
[2024-07-15 09:59:53.886509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (repeated 5x)
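Every command in the dump above completed with the same status, ABORTED - SQ DELETION (00/08). The pair in parentheses is the Status Code Type (00h, generic command status) followed by the Status Code (08h, command aborted due to SQ deletion): qpair 1's submission queue was destroyed during the controller reset while these reads were still in flight, so none of these entries indicate a media or data error. A minimal bash sketch of decoding that pair, using only the values seen in this log:

# Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion.
sct=00; sc=08
case "$sct/$sc" in
  00/00) echo "generic status / successful completion" ;;
  00/08) echo "generic status / command aborted due to SQ deletion" ;;
  *)     echo "see the NVMe base specification status code tables" ;;
esac
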
task offset: 19200 on job bdev=Nvme7n1 fails

                                                  Latency(us)
All jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536;
Verification LBA range: start 0x0 length 0x400; every job ended in about its listed runtime with error.

Device       runtime(s)      IOPS     MiB/s    Fail/s   TO/s      Average         min          max
Nvme1n1            0.77    166.63     10.41     83.31   0.00    252720.17    17767.54    250104.79
Nvme2n1            0.77    178.91     11.18     82.97   0.00    235411.34    10582.85    248551.35
Nvme3n1            0.77     82.63      5.16     82.63   0.00    364108.61    23495.87    298261.62
Nvme4n1            0.78    171.02     10.69     82.29   0.00    231636.43    25437.68    226803.11
Nvme5n1            0.76    168.06     10.50     84.03   0.00    226167.47    18544.26    251658.24
Nvme6n1            0.78    163.37     10.21     81.69   0.00    227379.07    34758.35    253211.69
Nvme7n1            0.76    169.10     10.57     84.55   0.00    212591.50    13398.47    245444.46
Nvme8n1            0.76    168.85     10.55     84.43   0.00    207060.39    53982.25    226803.11
Nvme9n1            0.76    168.62     10.54     84.31   0.00    201547.98    36117.62    240784.12
Nvme10n1           0.76    168.38     10.52     84.19   0.00    195998.40    18932.62    256318.58
===================================================================================================
Total                     1605.57    100.35    834.40   0.00    231051.12    10582.85    298261.62

[2024-07-15 09:59:53.914444] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
[2024-07-15 09:59:53.914525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6]
resetting controller
[2024-07-15 09:59:53.914596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116ef10 (9): Bad file descriptor
[... same flush error for tqpair 0x133aa90, 0x1342140 and 0x11ab740 ...]
[2024-07-15 09:59:53.915156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-15 09:59:53.915204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc64610 with addr=10.0.0.2, port=4420
[2024-07-15 09:59:53.915223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64610 is same with the state(5) to be set
[2024-07-15 09:59:53.915238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-07-15 09:59:53.915251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-07-15 09:59:53.915266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[... same reinitialization-failed sequence for cnode2, cnode3 and cnode4 ...]
[2024-07-15 09:59:53.915473] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (repeated 4x)
[2024-07-15 09:59:53.915926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (repeated 4x)
[2024-07-15 09:59:53.916003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc64610 (9): Bad file descriptor
[2024-07-15 09:59:53.916070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
[... same resetting-controller notice for cnode5 and cnode9 ...]
[2024-07-15 09:59:53.916148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
[2024-07-15 09:59:53.916164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
[2024-07-15 09:59:53.916179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
[... same resetting-controller notice for cnode8 and cnode7, then one more "Resetting controller failed." error ...]
[2024-07-15 09:59:53.916417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-15 09:59:53.916447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339e80 with addr=10.0.0.2, port=4420
[2024-07-15 09:59:53.916464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339e80 is same with the state(5) to be set
[... same connect()/recv-state error triple for tqpair 0x11b1010, 0x1338600, 0x1339320 and 0x11b7b40, followed by flush failures on all five ...]
[2024-07-15 09:59:53.917326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
[2024-07-15 09:59:53.917338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
[2024-07-15 09:59:53.917350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
[... same reinitialization-failed sequence for cnode5, cnode9, cnode8 and cnode7, with five more "Resetting controller failed." errors interleaved ...]
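With the last reset attempts failed, the run falls through to the harness cleanup that follows: the recorded target pid is killed even though the process may already be gone, the per-job bdevperf state file is removed, and the signal traps are cleared. A minimal sketch of that guard pattern, with the pid taken from the log purely as an illustrative value:

nvmfpid=1984785                          # pid recorded at target startup (value from the log)
kill -9 "$nvmfpid" 2>/dev/null || true   # "No such process" is expected and tolerated
rm -f ./local-job0-0-verify.state        # per-job bdevperf verify state
trap - SIGINT SIGTERM EXIT               # clear the traps once cleanup has run
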
00:26:37.913 09:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:26:37.913 09:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1984785 00:26:38.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1984785) - No such process 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:38.848 rmmod nvme_tcp 00:26:38.848 rmmod nvme_fabrics 00:26:38.848 rmmod nvme_keyring 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:38.848 09:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.747 09:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:40.747 00:26:40.747 real 0m7.202s 00:26:40.747 user 0m16.875s 00:26:40.747 sys 0m1.329s 00:26:40.747 
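nvmftestfini above also unloads the kernel initiator modules; the bare rmmod lines are modprobe's verbose output as it removes nvme_tcp, nvme_fabrics and nvme_keyring. A minimal sketch of the same teardown order, where the final lsmod check is an addition for illustration rather than something the harness runs:

sync                          # flush outstanding I/O before pulling the transport
modprobe -v -r nvme-tcp       # transport module first (this prints the rmmod lines)
modprobe -v -r nvme-fabrics   # then the fabrics core
lsmod | grep -E '^nvme_(tcp|fabrics|keyring)' || echo "nvme fabrics modules unloaded"
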
09:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:40.747 09:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:40.747 ************************************ 00:26:40.747 END TEST nvmf_shutdown_tc3 00:26:40.747 ************************************ 00:26:40.747 09:59:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:26:40.747 09:59:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:26:40.747 00:26:40.747 real 0m26.821s 00:26:40.747 user 1m14.680s 00:26:40.747 sys 0m6.112s 00:26:40.747 09:59:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:40.747 09:59:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:40.747 ************************************ 00:26:40.747 END TEST nvmf_shutdown 00:26:40.747 ************************************ 00:26:41.006 09:59:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:41.006 09:59:57 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:26:41.006 09:59:57 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:41.006 09:59:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:41.006 09:59:57 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:26:41.006 09:59:57 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:41.006 09:59:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:41.006 09:59:57 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:26:41.006 09:59:57 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:41.006 09:59:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:41.006 09:59:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:41.006 09:59:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:41.006 ************************************ 00:26:41.006 START TEST nvmf_multicontroller 00:26:41.006 ************************************ 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:41.006 * Looking for test storage... 
00:26:41.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:41.006 09:59:57 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:26:41.006 09:59:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:42.919 09:59:59 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:42.919 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:42.919 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:42.919 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:42.919 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:42.919 09:59:59 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:42.919 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:42.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:42.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:26:42.919 00:26:42.919 --- 10.0.0.2 ping statistics --- 00:26:42.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.920 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:26:42.920 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:42.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:42.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:26:42.920 00:26:42.920 --- 10.0.0.1 ping statistics --- 00:26:42.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.920 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:26:42.920 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:42.920 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:26:42.920 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:42.920 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:42.920 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:42.920 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:42.920 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:42.920 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:42.920 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:43.181 09:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:43.181 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:43.181 09:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:43.181 09:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:43.181 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1987283 00:26:43.181 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:43.181 09:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1987283 00:26:43.181 09:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1987283 ']' 00:26:43.181 09:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.181 09:59:59 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:26:43.181 09:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:43.181 09:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:43.181 09:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:43.181 [2024-07-15 09:59:59.769636] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:26:43.181 [2024-07-15 09:59:59.769712] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:43.181 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.181 [2024-07-15 09:59:59.806775] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:43.181 [2024-07-15 09:59:59.834411] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:43.181 [2024-07-15 09:59:59.919631] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:43.181 [2024-07-15 09:59:59.919686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:43.181 [2024-07-15 09:59:59.919702] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:43.181 [2024-07-15 09:59:59.919716] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:43.181 [2024-07-15 09:59:59.919728] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
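The interface plumbing that nvmf_tcp_init performed above (move the target-side port into a private network namespace, address both ends, open the NVMe/TCP port, verify reachability in both directions) can be reproduced standalone. A minimal sketch, run as root, using the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing exactly as they appear in the trace:

#!/usr/bin/env bash
# Sketch of nvmf_tcp_init as traced above. cvl_0_0 becomes the target port
# inside the namespace; cvl_0_1 stays in the host namespace as the initiator.
set -e
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"
ip addr add 10.0.0.1/24 dev cvl_0_1                                        # initiator IP
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0 # target IP
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT               # NVMe/TCP port
ping -c 1 10.0.0.2                                                         # host -> target
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1                  # target -> host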
00:26:43.181 [2024-07-15 09:59:59.919813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.181 [2024-07-15 09:59:59.919997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:43.181 [2024-07-15 09:59:59.920001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.440 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:43.440 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:26:43.440 10:00:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:43.440 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:43.440 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:43.440 10:00:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.440 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:43.440 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.440 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:43.440 [2024-07-15 10:00:00.061318] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:43.440 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.440 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:43.440 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.440 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:43.440 Malloc0 00:26:43.440 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:43.441 [2024-07-15 10:00:00.125186] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.441 
10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:43.441 [2024-07-15 10:00:00.133058] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:43.441 Malloc1 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1987333 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 1987333 /var/tmp/bdevperf.sock 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1987333 ']' 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:43.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:43.441 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:44.010 NVMe0n1 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.010 1 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:44.010 request: 00:26:44.010 { 00:26:44.010 "name": "NVMe0", 00:26:44.010 "trtype": "tcp", 00:26:44.010 "traddr": "10.0.0.2", 00:26:44.010 "adrfam": "ipv4", 00:26:44.010 "trsvcid": "4420", 00:26:44.010 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:44.010 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:44.010 "hostaddr": "10.0.0.2", 00:26:44.010 "hostsvcid": "60000", 00:26:44.010 "prchk_reftag": false, 00:26:44.010 "prchk_guard": false, 00:26:44.010 "hdgst": false, 00:26:44.010 "ddgst": false, 00:26:44.010 "method": "bdev_nvme_attach_controller", 00:26:44.010 "req_id": 1 00:26:44.010 } 00:26:44.010 Got JSON-RPC error response 00:26:44.010 response: 00:26:44.010 { 00:26:44.010 "code": -114, 00:26:44.010 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:44.010 } 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:44.010 request: 00:26:44.010 { 00:26:44.010 "name": "NVMe0", 00:26:44.010 "trtype": "tcp", 00:26:44.010 "traddr": "10.0.0.2", 00:26:44.010 "adrfam": "ipv4", 00:26:44.010 "trsvcid": "4420", 00:26:44.010 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:44.010 "hostaddr": "10.0.0.2", 00:26:44.010 "hostsvcid": "60000", 00:26:44.010 "prchk_reftag": false, 00:26:44.010 "prchk_guard": false, 00:26:44.010 
"hdgst": false, 00:26:44.010 "ddgst": false, 00:26:44.010 "method": "bdev_nvme_attach_controller", 00:26:44.010 "req_id": 1 00:26:44.010 } 00:26:44.010 Got JSON-RPC error response 00:26:44.010 response: 00:26:44.010 { 00:26:44.010 "code": -114, 00:26:44.010 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:44.010 } 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.010 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:44.010 request: 00:26:44.010 { 00:26:44.010 "name": "NVMe0", 00:26:44.010 "trtype": "tcp", 00:26:44.010 "traddr": "10.0.0.2", 00:26:44.010 "adrfam": "ipv4", 00:26:44.010 "trsvcid": "4420", 00:26:44.010 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:44.010 "hostaddr": "10.0.0.2", 00:26:44.010 "hostsvcid": "60000", 00:26:44.010 "prchk_reftag": false, 00:26:44.010 "prchk_guard": false, 00:26:44.010 "hdgst": false, 00:26:44.010 "ddgst": false, 00:26:44.010 "multipath": "disable", 00:26:44.010 "method": "bdev_nvme_attach_controller", 00:26:44.010 "req_id": 1 00:26:44.010 } 00:26:44.010 Got JSON-RPC error response 00:26:44.010 response: 00:26:44.010 { 00:26:44.010 "code": -114, 00:26:44.010 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:26:44.010 } 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:44.011 10:00:00 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:44.011 request: 00:26:44.011 { 00:26:44.011 "name": "NVMe0", 00:26:44.011 "trtype": "tcp", 00:26:44.011 "traddr": "10.0.0.2", 00:26:44.011 "adrfam": "ipv4", 00:26:44.011 "trsvcid": "4420", 00:26:44.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:44.011 "hostaddr": "10.0.0.2", 00:26:44.011 "hostsvcid": "60000", 00:26:44.011 "prchk_reftag": false, 00:26:44.011 "prchk_guard": false, 00:26:44.011 "hdgst": false, 00:26:44.011 "ddgst": false, 00:26:44.011 "multipath": "failover", 00:26:44.011 "method": "bdev_nvme_attach_controller", 00:26:44.011 "req_id": 1 00:26:44.011 } 00:26:44.011 Got JSON-RPC error response 00:26:44.011 response: 00:26:44.011 { 00:26:44.011 "code": -114, 00:26:44.011 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:44.011 } 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:44.011 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.011 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:44.269 00:26:44.269 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.269 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:44.269 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:44.269 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.269 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:44.269 10:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.269 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:44.269 10:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:45.203 0 00:26:45.203 10:00:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:45.203 10:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.203 10:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:45.203 10:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.203 10:00:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1987333 00:26:45.203 10:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1987333 ']' 00:26:45.203 10:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1987333 00:26:45.203 10:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:26:45.203 10:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:45.203 10:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1987333 00:26:45.203 10:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:45.203 10:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:45.203 10:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1987333' 00:26:45.203 killing process with pid 1987333 00:26:45.203 10:00:01 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1987333 00:26:45.204 10:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1987333 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:26:45.461 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:45.461 [2024-07-15 10:00:00.236512] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:26:45.461 [2024-07-15 10:00:00.236607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1987333 ] 00:26:45.461 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.461 [2024-07-15 10:00:00.270465] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:45.461 [2024-07-15 10:00:00.299777] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.461 [2024-07-15 10:00:00.386101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.461 [2024-07-15 10:00:00.805045] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 212f3090-fe79-41b3-86d6-c83082c4c83c already exists 00:26:45.461 [2024-07-15 10:00:00.805085] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:212f3090-fe79-41b3-86d6-c83082c4c83c alias for bdev NVMe1n1 00:26:45.461 [2024-07-15 10:00:00.805100] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:45.461 Running I/O for 1 seconds... 
00:26:45.461 00:26:45.461 Latency(us) 00:26:45.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.461 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:45.461 NVMe0n1 : 1.00 19463.74 76.03 0.00 0.00 6566.62 4102.07 14466.47 00:26:45.461 =================================================================================================================== 00:26:45.461 Total : 19463.74 76.03 0.00 0.00 6566.62 4102.07 14466.47 00:26:45.461 Received shutdown signal, test time was about 1.000000 seconds 00:26:45.461 00:26:45.461 Latency(us) 00:26:45.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.461 =================================================================================================================== 00:26:45.461 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:45.461 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:45.461 10:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:45.718 rmmod nvme_tcp 00:26:45.718 rmmod nvme_fabrics 00:26:45.718 rmmod nvme_keyring 00:26:45.718 10:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:45.718 10:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:26:45.718 10:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:26:45.718 10:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1987283 ']' 00:26:45.718 10:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1987283 00:26:45.718 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1987283 ']' 00:26:45.718 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1987283 00:26:45.718 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:26:45.718 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:45.718 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1987283 00:26:45.718 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:45.718 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:45.718 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1987283' 00:26:45.718 killing process with pid 1987283 00:26:45.718 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1987283 00:26:45.718 10:00:02 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1987283 00:26:45.975 10:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:45.975 10:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:45.975 10:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:45.975 10:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:45.975 10:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:45.975 10:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.975 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:45.975 10:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.882 10:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:47.882 00:26:47.882 real 0m7.063s 00:26:47.882 user 0m10.744s 00:26:47.882 sys 0m2.144s 00:26:47.882 10:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:47.882 10:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.882 ************************************ 00:26:47.882 END TEST nvmf_multicontroller 00:26:47.882 ************************************ 00:26:48.160 10:00:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:48.160 10:00:04 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:48.160 10:00:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:48.160 10:00:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:48.160 10:00:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:48.160 ************************************ 00:26:48.160 START TEST nvmf_aer 00:26:48.160 ************************************ 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:48.160 * Looking for test storage... 
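The JSON-RPC flow that multicontroller.sh drove above reduces to a short script. rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, so the sketch below calls rpc.py directly; every subcommand, address, port, and NQN is taken from the trace (the test repeats the same subsystem setup for cnode2/Malloc1, omitted here). The -114 responses logged above are the point of the test: re-attaching under an existing controller name with an identical network path is rejected regardless of host qualifiers or multipath mode, while the second listener path on port 4421 attaches cleanly.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BDEVPERF_RPC=/var/tmp/bdevperf.sock

# Target side (default /var/tmp/spdk.sock): transport, backing bdev,
# subsystem, namespace, and one listener per port.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# Initiator side, against bdevperf's private RPC socket (bdevperf was started
# above with: bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f).
$RPC -s $BDEVPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000   # succeeds: NVMe0n1
# Repeating the attach with the same name and same path fails with -114, as logged.
$RPC -s $BDEVPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1                        # second path: succeeds
$RPC -s $BDEVPERF_RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC -s $BDEVPERF_RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
$RPC -s $BDEVPERF_RPC bdev_nvme_get_controllers | grep -c NVMe   # expect 2

With both controllers in place, the I/O phase is triggered over the same socket: examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests, which produced the one-second write run and the latency table quoted from try.txt above.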
00:26:48.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:26:48.160 10:00:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:50.068 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:26:50.068 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:50.068 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:50.068 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.068 
10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.068 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:50.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:26:50.069 00:26:50.069 --- 10.0.0.2 ping statistics --- 00:26:50.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.069 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:50.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:26:50.069 00:26:50.069 --- 10.0.0.1 ping statistics --- 00:26:50.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.069 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1989622 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1989622 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1989622 ']' 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:50.069 10:00:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:50.069 [2024-07-15 10:00:06.850045] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:26:50.069 [2024-07-15 10:00:06.850125] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.326 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.326 [2024-07-15 10:00:06.898833] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:50.326 [2024-07-15 10:00:06.926055] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:50.326 [2024-07-15 10:00:07.014264] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
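
The nvmf_tcp_init sequence above carves the two E810 ports into a point-to-point test link: the target-side port (cvl_0_0) moves into a private network namespace while the initiator side (cvl_0_1) stays in the default one, so NVMe/TCP traffic crosses real hardware. A minimal sketch of the same plumbing, assuming the interface names from this run (they differ per host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into its own netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

After this, every nvmf_tgt invocation is wrapped in 'ip netns exec cvl_0_0_ns_spdk' (the NVMF_TARGET_NS_CMD prefix seen above), which is why the target listens on 10.0.0.2 from inside the namespace.
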
00:26:50.326 [2024-07-15 10:00:07.014326] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.326 [2024-07-15 10:00:07.014354] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.326 [2024-07-15 10:00:07.014365] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.326 [2024-07-15 10:00:07.014375] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:50.326 [2024-07-15 10:00:07.014505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.326 [2024-07-15 10:00:07.014530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.326 [2024-07-15 10:00:07.014590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:50.326 [2024-07-15 10:00:07.014592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:50.584 [2024-07-15 10:00:07.151653] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:50.584 Malloc0 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.584 10:00:07 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:50.584 [2024-07-15 10:00:07.202713] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.584 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.585 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:50.585 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.585 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:50.585 [ 00:26:50.585 { 00:26:50.585 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:50.585 "subtype": "Discovery", 00:26:50.585 "listen_addresses": [], 00:26:50.585 "allow_any_host": true, 00:26:50.585 "hosts": [] 00:26:50.585 }, 00:26:50.585 { 00:26:50.585 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:50.585 "subtype": "NVMe", 00:26:50.585 "listen_addresses": [ 00:26:50.585 { 00:26:50.585 "trtype": "TCP", 00:26:50.585 "adrfam": "IPv4", 00:26:50.585 "traddr": "10.0.0.2", 00:26:50.585 "trsvcid": "4420" 00:26:50.585 } 00:26:50.585 ], 00:26:50.585 "allow_any_host": true, 00:26:50.585 "hosts": [], 00:26:50.585 "serial_number": "SPDK00000000000001", 00:26:50.585 "model_number": "SPDK bdev Controller", 00:26:50.585 "max_namespaces": 2, 00:26:50.585 "min_cntlid": 1, 00:26:50.585 "max_cntlid": 65519, 00:26:50.585 "namespaces": [ 00:26:50.585 { 00:26:50.585 "nsid": 1, 00:26:50.585 "bdev_name": "Malloc0", 00:26:50.585 "name": "Malloc0", 00:26:50.585 "nguid": "ACBBDB5C15A94460AC207C1966D7A863", 00:26:50.585 "uuid": "acbbdb5c-15a9-4460-ac20-7c1966d7a863" 00:26:50.585 } 00:26:50.585 ] 00:26:50.585 } 00:26:50.585 ] 00:26:50.585 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.585 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:50.585 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:50.585 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1989665 00:26:50.585 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:50.585 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:50.585 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:26:50.585 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:50.585 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:26:50.585 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:26:50.585 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:50.585 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.585 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:50.585 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:26:50.585 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:26:50.585 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:50.843 Malloc1 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:50.843 [ 00:26:50.843 { 00:26:50.843 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:50.843 "subtype": "Discovery", 00:26:50.843 "listen_addresses": [], 00:26:50.843 "allow_any_host": true, 00:26:50.843 "hosts": [] 00:26:50.843 }, 00:26:50.843 { 00:26:50.843 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:50.843 "subtype": "NVMe", 00:26:50.843 "listen_addresses": [ 00:26:50.843 { 00:26:50.843 "trtype": "TCP", 00:26:50.843 "adrfam": "IPv4", 00:26:50.843 "traddr": "10.0.0.2", 00:26:50.843 "trsvcid": "4420" 00:26:50.843 } 00:26:50.843 ], 00:26:50.843 "allow_any_host": true, 00:26:50.843 "hosts": [], 00:26:50.843 "serial_number": "SPDK00000000000001", 00:26:50.843 "model_number": "SPDK bdev Controller", 00:26:50.843 "max_namespaces": 2, 00:26:50.843 "min_cntlid": 1, 00:26:50.843 "max_cntlid": 65519, 00:26:50.843 "namespaces": [ 00:26:50.843 { 00:26:50.843 "nsid": 1, 00:26:50.843 "bdev_name": "Malloc0", 00:26:50.843 "name": "Malloc0", 00:26:50.843 "nguid": "ACBBDB5C15A94460AC207C1966D7A863", 00:26:50.843 "uuid": "acbbdb5c-15a9-4460-ac20-7c1966d7a863" 00:26:50.843 }, 00:26:50.843 { 00:26:50.843 "nsid": 2, 00:26:50.843 "bdev_name": "Malloc1", 00:26:50.843 "name": "Malloc1", 00:26:50.843 "nguid": "63EF6BC27DE8417B94D86644FB1F8150", 00:26:50.843 "uuid": "63ef6bc2-7de8-417b-94d8-6644fb1f8150" 00:26:50.843 } 00:26:50.843 ] 00:26:50.843 } 00:26:50.843 ] 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1989665 00:26:50.843 Asynchronous Event Request test 00:26:50.843 Attaching to 10.0.0.2 00:26:50.843 Attached to 10.0.0.2 00:26:50.843 Registering asynchronous event callbacks... 00:26:50.843 Starting namespace attribute notice tests for all controllers... 
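
The aer flow above reduces to a short RPC sequence: create the TCP transport, publish a subsystem capped at two namespaces (-m 2) with Malloc0 as nsid 1, start the aer tool, and hot-add Malloc1 as nsid 2 so the target raises a Changed Namespace notice. A condensed sketch with scripts/rpc.py (rpc_cmd in the trace appears to be a thin wrapper around it) and the long workspace paths abbreviated:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # aer touches /tmp/aer_touch_file once its callbacks are registered; the
    # harness polls for that file (the sleep-0.1 loop above) before hot-adding
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # triggers the AEN
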
00:26:50.843 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:50.843 aer_cb - Changed Namespace 00:26:50.843 Cleaning up... 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.843 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:51.102 rmmod nvme_tcp 00:26:51.102 rmmod nvme_fabrics 00:26:51.102 rmmod nvme_keyring 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1989622 ']' 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1989622 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1989622 ']' 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1989622 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1989622 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1989622' 00:26:51.102 killing process with pid 1989622 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1989622 00:26:51.102 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1989622 00:26:51.361 10:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == 
iso ']' 00:26:51.361 10:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:51.361 10:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:51.361 10:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:51.361 10:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:51.361 10:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.361 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:51.361 10:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.257 10:00:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:53.257 00:26:53.257 real 0m5.298s 00:26:53.257 user 0m4.350s 00:26:53.257 sys 0m1.828s 00:26:53.257 10:00:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:53.257 10:00:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.257 ************************************ 00:26:53.257 END TEST nvmf_aer 00:26:53.257 ************************************ 00:26:53.257 10:00:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:53.257 10:00:10 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:53.257 10:00:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:53.257 10:00:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:53.257 10:00:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.515 ************************************ 00:26:53.515 START TEST nvmf_async_init 00:26:53.515 ************************************ 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:53.515 * Looking for test storage... 
00:26:53.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:53.515 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:53.516 10:00:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:53.516 10:00:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:53.516 10:00:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:26:53.516 10:00:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:53.516 10:00:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:53.516 10:00:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:53.516 10:00:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=deed060eb7c648c8aa4169e116311c71 00:26:53.516 10:00:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:53.516 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:53.516 10:00:10 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.516 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:53.516 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:53.516 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:53.516 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.516 10:00:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:53.516 10:00:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.516 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:53.516 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:53.516 10:00:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:26:53.516 10:00:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:55.439 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:55.439 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:55.439 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
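
Interface discovery in these traces is plain sysfs bookkeeping: the helper matches vendor:device pairs (0x8086:0x159b is an E810 port bound to the ice driver) and then globs the PCI function's net/ directory for the kernel interface name, keeping only interfaces that report up (the [[ up == up ]] checks above; reading operstate as sketched here is an assumption about the helper's internals). The same lookup by hand, using the first port's BDF from this run, with illustrative output in the comments:

    lspci -nn -s 0000:0a:00.0                    # ... [8086:159b] -> supported E810 device
    ls /sys/bus/pci/devices/0000:0a:00.0/net/    # -> cvl_0_0
    cat /sys/class/net/cvl_0_0/operstate         # 'up', so it lands in net_devs
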
00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:55.439 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:55.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:55.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:26:55.439 00:26:55.439 --- 10.0.0.2 ping statistics --- 00:26:55.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.439 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:55.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:55.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:26:55.439 00:26:55.439 --- 10.0.0.1 ping statistics --- 00:26:55.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.439 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1992217 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1992217 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1992217 ']' 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:55.439 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.699 [2024-07-15 10:00:12.240058] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:26:55.699 [2024-07-15 10:00:12.240142] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.699 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.699 [2024-07-15 10:00:12.284698] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:55.699 [2024-07-15 10:00:12.317919] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.699 [2024-07-15 10:00:12.411902] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.699 [2024-07-15 10:00:12.411977] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.699 [2024-07-15 10:00:12.412001] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:55.699 [2024-07-15 10:00:12.412015] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:55.699 [2024-07-15 10:00:12.412036] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:55.699 [2024-07-15 10:00:12.412076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.957 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:55.957 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:26:55.957 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.958 [2024-07-15 10:00:12.559507] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.958 null0 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:55.958 10:00:12 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g deed060eb7c648c8aa4169e116311c71 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.958 [2024-07-15 10:00:12.599776] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.958 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.218 nvme0n1 00:26:56.218 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.218 10:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:56.218 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.218 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.218 [ 00:26:56.218 { 00:26:56.218 "name": "nvme0n1", 00:26:56.218 "aliases": [ 00:26:56.218 "deed060e-b7c6-48c8-aa41-69e116311c71" 00:26:56.218 ], 00:26:56.218 "product_name": "NVMe disk", 00:26:56.218 "block_size": 512, 00:26:56.218 "num_blocks": 2097152, 00:26:56.218 "uuid": "deed060e-b7c6-48c8-aa41-69e116311c71", 00:26:56.218 "assigned_rate_limits": { 00:26:56.218 "rw_ios_per_sec": 0, 00:26:56.218 "rw_mbytes_per_sec": 0, 00:26:56.218 "r_mbytes_per_sec": 0, 00:26:56.218 "w_mbytes_per_sec": 0 00:26:56.218 }, 00:26:56.218 "claimed": false, 00:26:56.218 "zoned": false, 00:26:56.218 "supported_io_types": { 00:26:56.218 "read": true, 00:26:56.218 "write": true, 00:26:56.218 "unmap": false, 00:26:56.218 "flush": true, 00:26:56.218 "reset": true, 00:26:56.218 "nvme_admin": true, 00:26:56.218 "nvme_io": true, 00:26:56.218 "nvme_io_md": false, 00:26:56.218 "write_zeroes": true, 00:26:56.218 "zcopy": false, 00:26:56.218 "get_zone_info": false, 00:26:56.218 "zone_management": false, 00:26:56.218 "zone_append": false, 00:26:56.218 "compare": true, 00:26:56.218 "compare_and_write": true, 00:26:56.218 "abort": true, 00:26:56.218 "seek_hole": false, 00:26:56.218 "seek_data": false, 00:26:56.218 "copy": true, 00:26:56.218 "nvme_iov_md": false 00:26:56.218 }, 00:26:56.218 "memory_domains": [ 00:26:56.218 { 00:26:56.218 "dma_device_id": "system", 00:26:56.218 "dma_device_type": 1 00:26:56.218 } 00:26:56.218 ], 
00:26:56.218 "driver_specific": { 00:26:56.218 "nvme": [ 00:26:56.218 { 00:26:56.218 "trid": { 00:26:56.218 "trtype": "TCP", 00:26:56.218 "adrfam": "IPv4", 00:26:56.218 "traddr": "10.0.0.2", 00:26:56.218 "trsvcid": "4420", 00:26:56.218 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:56.218 }, 00:26:56.218 "ctrlr_data": { 00:26:56.218 "cntlid": 1, 00:26:56.218 "vendor_id": "0x8086", 00:26:56.218 "model_number": "SPDK bdev Controller", 00:26:56.218 "serial_number": "00000000000000000000", 00:26:56.218 "firmware_revision": "24.09", 00:26:56.218 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:56.218 "oacs": { 00:26:56.218 "security": 0, 00:26:56.218 "format": 0, 00:26:56.218 "firmware": 0, 00:26:56.218 "ns_manage": 0 00:26:56.218 }, 00:26:56.218 "multi_ctrlr": true, 00:26:56.218 "ana_reporting": false 00:26:56.218 }, 00:26:56.218 "vs": { 00:26:56.218 "nvme_version": "1.3" 00:26:56.218 }, 00:26:56.218 "ns_data": { 00:26:56.218 "id": 1, 00:26:56.218 "can_share": true 00:26:56.218 } 00:26:56.218 } 00:26:56.218 ], 00:26:56.218 "mp_policy": "active_passive" 00:26:56.218 } 00:26:56.218 } 00:26:56.218 ] 00:26:56.218 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.218 10:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:56.218 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.218 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.218 [2024-07-15 10:00:12.852855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:56.218 [2024-07-15 10:00:12.852961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26950b0 (9): Bad file descriptor 00:26:56.218 [2024-07-15 10:00:12.995033] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:56.218 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.218 10:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:56.218 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.218 10:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.477 [ 00:26:56.477 { 00:26:56.477 "name": "nvme0n1", 00:26:56.477 "aliases": [ 00:26:56.477 "deed060e-b7c6-48c8-aa41-69e116311c71" 00:26:56.477 ], 00:26:56.477 "product_name": "NVMe disk", 00:26:56.477 "block_size": 512, 00:26:56.477 "num_blocks": 2097152, 00:26:56.477 "uuid": "deed060e-b7c6-48c8-aa41-69e116311c71", 00:26:56.477 "assigned_rate_limits": { 00:26:56.477 "rw_ios_per_sec": 0, 00:26:56.477 "rw_mbytes_per_sec": 0, 00:26:56.477 "r_mbytes_per_sec": 0, 00:26:56.477 "w_mbytes_per_sec": 0 00:26:56.477 }, 00:26:56.477 "claimed": false, 00:26:56.477 "zoned": false, 00:26:56.477 "supported_io_types": { 00:26:56.477 "read": true, 00:26:56.477 "write": true, 00:26:56.477 "unmap": false, 00:26:56.477 "flush": true, 00:26:56.477 "reset": true, 00:26:56.477 "nvme_admin": true, 00:26:56.477 "nvme_io": true, 00:26:56.477 "nvme_io_md": false, 00:26:56.477 "write_zeroes": true, 00:26:56.477 "zcopy": false, 00:26:56.477 "get_zone_info": false, 00:26:56.477 "zone_management": false, 00:26:56.477 "zone_append": false, 00:26:56.477 "compare": true, 00:26:56.477 "compare_and_write": true, 00:26:56.477 "abort": true, 00:26:56.477 "seek_hole": false, 00:26:56.477 "seek_data": false, 00:26:56.477 "copy": true, 00:26:56.477 "nvme_iov_md": false 00:26:56.477 }, 00:26:56.477 "memory_domains": [ 00:26:56.477 { 00:26:56.477 "dma_device_id": "system", 00:26:56.477 "dma_device_type": 1 00:26:56.477 } 00:26:56.477 ], 00:26:56.477 "driver_specific": { 00:26:56.477 "nvme": [ 00:26:56.477 { 00:26:56.477 "trid": { 00:26:56.477 "trtype": "TCP", 00:26:56.477 "adrfam": "IPv4", 00:26:56.477 "traddr": "10.0.0.2", 00:26:56.477 "trsvcid": "4420", 00:26:56.477 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:56.477 }, 00:26:56.477 "ctrlr_data": { 00:26:56.477 "cntlid": 2, 00:26:56.477 "vendor_id": "0x8086", 00:26:56.477 "model_number": "SPDK bdev Controller", 00:26:56.477 "serial_number": "00000000000000000000", 00:26:56.477 "firmware_revision": "24.09", 00:26:56.477 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:56.477 "oacs": { 00:26:56.477 "security": 0, 00:26:56.477 "format": 0, 00:26:56.477 "firmware": 0, 00:26:56.477 "ns_manage": 0 00:26:56.477 }, 00:26:56.477 "multi_ctrlr": true, 00:26:56.477 "ana_reporting": false 00:26:56.477 }, 00:26:56.477 "vs": { 00:26:56.477 "nvme_version": "1.3" 00:26:56.477 }, 00:26:56.477 "ns_data": { 00:26:56.477 "id": 1, 00:26:56.477 "can_share": true 00:26:56.477 } 00:26:56.477 } 00:26:56.477 ], 00:26:56.477 "mp_policy": "active_passive" 00:26:56.477 } 00:26:56.477 } 00:26:56.477 ] 00:26:56.477 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.477 10:00:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.477 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.477 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.477 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.477 10:00:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:56.477 10:00:13 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ZUoNzXcDgW 00:26:56.477 10:00:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:56.477 10:00:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ZUoNzXcDgW 00:26:56.477 10:00:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:56.477 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.477 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.477 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.477 10:00:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:56.477 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.477 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.477 [2024-07-15 10:00:13.045663] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:56.478 [2024-07-15 10:00:13.045790] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZUoNzXcDgW 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.478 [2024-07-15 10:00:13.053683] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZUoNzXcDgW 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.478 [2024-07-15 10:00:13.061710] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:56.478 [2024-07-15 10:00:13.061769] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:56.478 nvme0n1 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.478 [ 00:26:56.478 { 00:26:56.478 "name": "nvme0n1", 00:26:56.478 "aliases": [ 00:26:56.478 "deed060e-b7c6-48c8-aa41-69e116311c71" 00:26:56.478 ], 00:26:56.478 "product_name": "NVMe disk", 00:26:56.478 
"block_size": 512, 00:26:56.478 "num_blocks": 2097152, 00:26:56.478 "uuid": "deed060e-b7c6-48c8-aa41-69e116311c71", 00:26:56.478 "assigned_rate_limits": { 00:26:56.478 "rw_ios_per_sec": 0, 00:26:56.478 "rw_mbytes_per_sec": 0, 00:26:56.478 "r_mbytes_per_sec": 0, 00:26:56.478 "w_mbytes_per_sec": 0 00:26:56.478 }, 00:26:56.478 "claimed": false, 00:26:56.478 "zoned": false, 00:26:56.478 "supported_io_types": { 00:26:56.478 "read": true, 00:26:56.478 "write": true, 00:26:56.478 "unmap": false, 00:26:56.478 "flush": true, 00:26:56.478 "reset": true, 00:26:56.478 "nvme_admin": true, 00:26:56.478 "nvme_io": true, 00:26:56.478 "nvme_io_md": false, 00:26:56.478 "write_zeroes": true, 00:26:56.478 "zcopy": false, 00:26:56.478 "get_zone_info": false, 00:26:56.478 "zone_management": false, 00:26:56.478 "zone_append": false, 00:26:56.478 "compare": true, 00:26:56.478 "compare_and_write": true, 00:26:56.478 "abort": true, 00:26:56.478 "seek_hole": false, 00:26:56.478 "seek_data": false, 00:26:56.478 "copy": true, 00:26:56.478 "nvme_iov_md": false 00:26:56.478 }, 00:26:56.478 "memory_domains": [ 00:26:56.478 { 00:26:56.478 "dma_device_id": "system", 00:26:56.478 "dma_device_type": 1 00:26:56.478 } 00:26:56.478 ], 00:26:56.478 "driver_specific": { 00:26:56.478 "nvme": [ 00:26:56.478 { 00:26:56.478 "trid": { 00:26:56.478 "trtype": "TCP", 00:26:56.478 "adrfam": "IPv4", 00:26:56.478 "traddr": "10.0.0.2", 00:26:56.478 "trsvcid": "4421", 00:26:56.478 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:56.478 }, 00:26:56.478 "ctrlr_data": { 00:26:56.478 "cntlid": 3, 00:26:56.478 "vendor_id": "0x8086", 00:26:56.478 "model_number": "SPDK bdev Controller", 00:26:56.478 "serial_number": "00000000000000000000", 00:26:56.478 "firmware_revision": "24.09", 00:26:56.478 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:56.478 "oacs": { 00:26:56.478 "security": 0, 00:26:56.478 "format": 0, 00:26:56.478 "firmware": 0, 00:26:56.478 "ns_manage": 0 00:26:56.478 }, 00:26:56.478 "multi_ctrlr": true, 00:26:56.478 "ana_reporting": false 00:26:56.478 }, 00:26:56.478 "vs": { 00:26:56.478 "nvme_version": "1.3" 00:26:56.478 }, 00:26:56.478 "ns_data": { 00:26:56.478 "id": 1, 00:26:56.478 "can_share": true 00:26:56.478 } 00:26:56.478 } 00:26:56.478 ], 00:26:56.478 "mp_policy": "active_passive" 00:26:56.478 } 00:26:56.478 } 00:26:56.478 ] 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.ZUoNzXcDgW 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 
00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:56.478 rmmod nvme_tcp 00:26:56.478 rmmod nvme_fabrics 00:26:56.478 rmmod nvme_keyring 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1992217 ']' 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1992217 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1992217 ']' 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1992217 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1992217 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1992217' 00:26:56.478 killing process with pid 1992217 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1992217 00:26:56.478 [2024-07-15 10:00:13.256631] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:56.478 [2024-07-15 10:00:13.256674] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:56.478 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1992217 00:26:56.735 10:00:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:56.735 10:00:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:56.735 10:00:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:56.735 10:00:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:56.735 10:00:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:56.735 10:00:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.735 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:56.735 10:00:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.286 10:00:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:59.286 00:26:59.286 real 0m5.426s 00:26:59.286 user 0m2.057s 00:26:59.286 sys 0m1.784s 00:26:59.286 10:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:59.286 10:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:59.286 ************************************ 00:26:59.286 END TEST nvmf_async_init 00:26:59.286 ************************************ 00:26:59.286 10:00:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:59.286 10:00:15 
nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:59.286 10:00:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:59.286 10:00:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:59.286 10:00:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:59.286 ************************************ 00:26:59.286 START TEST dma 00:26:59.286 ************************************ 00:26:59.286 10:00:15 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:59.286 * Looking for test storage... 00:26:59.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:59.286 10:00:15 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:59.286 10:00:15 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.286 10:00:15 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.286 10:00:15 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.286 10:00:15 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.286 10:00:15 nvmf_tcp.dma -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.286 10:00:15 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.286 10:00:15 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:26:59.286 10:00:15 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:59.286 10:00:15 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:59.286 10:00:15 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:59.286 10:00:15 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:26:59.286 00:26:59.286 real 0m0.064s 00:26:59.286 user 0m0.031s 00:26:59.286 sys 0m0.038s 00:26:59.286 10:00:15 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:59.286 10:00:15 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:26:59.286 ************************************ 00:26:59.286 END TEST dma 00:26:59.286 ************************************ 00:26:59.286 10:00:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:59.286 10:00:15 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:59.286 10:00:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:59.286 10:00:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:59.286 10:00:15 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:26:59.286 ************************************ 00:26:59.286 START TEST nvmf_identify 00:26:59.286 ************************************ 00:26:59.286 10:00:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:59.286 * Looking for test storage... 00:26:59.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:59.286 10:00:15 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:59.286 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:26:59.286 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:59.286 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:59.286 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:59.286 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:26:59.287 10:00:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:01.193 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:01.193 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:01.193 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:01.193 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:01.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:01.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:27:01.193 00:27:01.193 --- 10.0.0.2 ping statistics --- 00:27:01.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.193 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:01.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:01.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:27:01.193 00:27:01.193 --- 10.0.0.1 ping statistics --- 00:27:01.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.193 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1994286 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1994286 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1994286 ']' 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:01.193 10:00:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.193 [2024-07-15 10:00:17.852749] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:01.193 [2024-07-15 10:00:17.852835] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.193 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.193 [2024-07-15 10:00:17.893957] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
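(The ping exchange above is the sanity check on the two-port loopback topology that nvmf_tcp_init assembles: one port of the NIC is moved into a private network namespace and plays the target at 10.0.0.2, while the sibling port stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of that setup, using the cvl_0_0/cvl_0_1 names the kernel reported:)

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
ping -c 1 10.0.0.2                                             # root namespace -> target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> initiator port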
00:27:01.193 [2024-07-15 10:00:17.925457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:01.449 [2024-07-15 10:00:18.019460] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:01.449 [2024-07-15 10:00:18.019519] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:01.449 [2024-07-15 10:00:18.019534] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:01.450 [2024-07-15 10:00:18.019548] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:01.450 [2024-07-15 10:00:18.019559] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:01.450 [2024-07-15 10:00:18.019631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.450 [2024-07-15 10:00:18.019686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:01.450 [2024-07-15 10:00:18.019805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:01.450 [2024-07-15 10:00:18.019808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.450 [2024-07-15 10:00:18.140438] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.450 Malloc0 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.450 
10:00:18 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.450 [2024-07-15 10:00:18.211357] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.450 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.450 [ 00:27:01.450 { 00:27:01.450 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:01.450 "subtype": "Discovery", 00:27:01.450 "listen_addresses": [ 00:27:01.450 { 00:27:01.450 "trtype": "TCP", 00:27:01.450 "adrfam": "IPv4", 00:27:01.450 "traddr": "10.0.0.2", 00:27:01.450 "trsvcid": "4420" 00:27:01.450 } 00:27:01.450 ], 00:27:01.450 "allow_any_host": true, 00:27:01.450 "hosts": [] 00:27:01.450 }, 00:27:01.450 { 00:27:01.450 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:01.450 "subtype": "NVMe", 00:27:01.450 "listen_addresses": [ 00:27:01.450 { 00:27:01.450 "trtype": "TCP", 00:27:01.450 "adrfam": "IPv4", 00:27:01.450 "traddr": "10.0.0.2", 00:27:01.450 "trsvcid": "4420" 00:27:01.450 } 00:27:01.450 ], 00:27:01.450 "allow_any_host": true, 00:27:01.450 "hosts": [], 00:27:01.450 "serial_number": "SPDK00000000000001", 00:27:01.450 "model_number": "SPDK bdev Controller", 00:27:01.450 "max_namespaces": 32, 00:27:01.450 "min_cntlid": 1, 00:27:01.450 "max_cntlid": 65519, 00:27:01.450 "namespaces": [ 00:27:01.450 { 00:27:01.450 "nsid": 1, 00:27:01.712 "bdev_name": "Malloc0", 00:27:01.712 "name": "Malloc0", 00:27:01.712 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:01.712 "eui64": "ABCDEF0123456789", 00:27:01.712 "uuid": "31528aef-0c69-4dd2-8f44-9c5ff4375656" 00:27:01.712 } 00:27:01.712 ] 00:27:01.712 } 00:27:01.712 ] 00:27:01.712 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.712 10:00:18 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:01.712 [2024-07-15 10:00:18.253264] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
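(The DEBUG trace that follows is produced by the standalone identify example walking the discovery subsystem; outside the harness the equivalent invocation, as run by identify.sh above, is simply:)

./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all    # enable debug log flags, hence the nvme_tcp/nvme_ctrlr traces below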
00:27:01.712 [2024-07-15 10:00:18.253307] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1994365 ] 00:27:01.712 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.712 [2024-07-15 10:00:18.270622] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:01.712 [2024-07-15 10:00:18.288213] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:01.712 [2024-07-15 10:00:18.288275] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:01.712 [2024-07-15 10:00:18.288284] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:01.712 [2024-07-15 10:00:18.288299] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:01.712 [2024-07-15 10:00:18.288309] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:01.712 [2024-07-15 10:00:18.291937] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:01.712 [2024-07-15 10:00:18.292008] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x6d6630 0 00:27:01.712 [2024-07-15 10:00:18.298891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:01.712 [2024-07-15 10:00:18.298912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:01.712 [2024-07-15 10:00:18.298924] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:01.712 [2024-07-15 10:00:18.298931] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:01.712 [2024-07-15 10:00:18.298996] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.712 [2024-07-15 10:00:18.299009] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.712 [2024-07-15 10:00:18.299016] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6630) 00:27:01.712 [2024-07-15 10:00:18.299033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:01.712 [2024-07-15 10:00:18.299059] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x724f80, cid 0, qid 0 00:27:01.712 [2024-07-15 10:00:18.306888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.712 [2024-07-15 10:00:18.306905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.712 [2024-07-15 10:00:18.306912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.712 [2024-07-15 10:00:18.306919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x724f80) on tqpair=0x6d6630 00:27:01.712 [2024-07-15 10:00:18.306935] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:01.712 [2024-07-15 10:00:18.306961] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:01.712 [2024-07-15 10:00:18.306970] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no 
timeout) 00:27:01.712 [2024-07-15 10:00:18.306993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.712 [2024-07-15 10:00:18.307002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.712 [2024-07-15 10:00:18.307008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6630) 00:27:01.712 [2024-07-15 10:00:18.307019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.712 [2024-07-15 10:00:18.307043] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x724f80, cid 0, qid 0 00:27:01.712 [2024-07-15 10:00:18.307210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.712 [2024-07-15 10:00:18.307226] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.712 [2024-07-15 10:00:18.307233] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.712 [2024-07-15 10:00:18.307240] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x724f80) on tqpair=0x6d6630 00:27:01.712 [2024-07-15 10:00:18.307248] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:01.712 [2024-07-15 10:00:18.307261] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:01.712 [2024-07-15 10:00:18.307274] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.712 [2024-07-15 10:00:18.307281] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.712 [2024-07-15 10:00:18.307288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6630) 00:27:01.712 [2024-07-15 10:00:18.307298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.712 [2024-07-15 10:00:18.307320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x724f80, cid 0, qid 0 00:27:01.712 [2024-07-15 10:00:18.307488] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.712 [2024-07-15 10:00:18.307500] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.712 [2024-07-15 10:00:18.307507] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.712 [2024-07-15 10:00:18.307514] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x724f80) on tqpair=0x6d6630 00:27:01.712 [2024-07-15 10:00:18.307522] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:01.712 [2024-07-15 10:00:18.307541] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:01.712 [2024-07-15 10:00:18.307554] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.712 [2024-07-15 10:00:18.307561] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.712 [2024-07-15 10:00:18.307568] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6630) 00:27:01.712 [2024-07-15 10:00:18.307578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.712 [2024-07-15 10:00:18.307599] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x724f80, cid 0, qid 0 00:27:01.712 [2024-07-15 10:00:18.307744] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.712 [2024-07-15 10:00:18.307760] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.712 [2024-07-15 10:00:18.307767] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.712 [2024-07-15 10:00:18.307773] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x724f80) on tqpair=0x6d6630 00:27:01.712 [2024-07-15 10:00:18.307783] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:01.712 [2024-07-15 10:00:18.307799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.712 [2024-07-15 10:00:18.307809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.712 [2024-07-15 10:00:18.307815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6630) 00:27:01.712 [2024-07-15 10:00:18.307825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.712 [2024-07-15 10:00:18.307846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x724f80, cid 0, qid 0 00:27:01.712 [2024-07-15 10:00:18.307962] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.712 [2024-07-15 10:00:18.307976] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.712 [2024-07-15 10:00:18.307983] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.712 [2024-07-15 10:00:18.307989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x724f80) on tqpair=0x6d6630 00:27:01.712 [2024-07-15 10:00:18.307998] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:01.713 [2024-07-15 10:00:18.308006] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:01.713 [2024-07-15 10:00:18.308019] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:01.713 [2024-07-15 10:00:18.308128] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:01.713 [2024-07-15 10:00:18.308137] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:01.713 [2024-07-15 10:00:18.308150] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.308157] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.308163] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6630) 00:27:01.713 [2024-07-15 10:00:18.308174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.713 [2024-07-15 10:00:18.308211] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x724f80, cid 0, qid 0 00:27:01.713 [2024-07-15 10:00:18.308426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.713 [2024-07-15 10:00:18.308442] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.713 [2024-07-15 10:00:18.308449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.308460] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x724f80) on tqpair=0x6d6630 00:27:01.713 [2024-07-15 10:00:18.308469] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:01.713 [2024-07-15 10:00:18.308486] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.308495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.308502] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6630) 00:27:01.713 [2024-07-15 10:00:18.308512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.713 [2024-07-15 10:00:18.308532] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x724f80, cid 0, qid 0 00:27:01.713 [2024-07-15 10:00:18.308696] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.713 [2024-07-15 10:00:18.308712] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.713 [2024-07-15 10:00:18.308719] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.308725] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x724f80) on tqpair=0x6d6630 00:27:01.713 [2024-07-15 10:00:18.308733] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:01.713 [2024-07-15 10:00:18.308741] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:01.713 [2024-07-15 10:00:18.308755] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:01.713 [2024-07-15 10:00:18.308770] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:01.713 [2024-07-15 10:00:18.308785] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.308792] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6630) 00:27:01.713 [2024-07-15 10:00:18.308803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.713 [2024-07-15 10:00:18.308824] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x724f80, cid 0, qid 0 00:27:01.713 [2024-07-15 10:00:18.309011] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.713 [2024-07-15 10:00:18.309027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.713 [2024-07-15 10:00:18.309034] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.309041] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6d6630): datao=0, datal=4096, cccid=0 00:27:01.713 [2024-07-15 10:00:18.309049] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x724f80) 
on tqpair(0x6d6630): expected_datao=0, payload_size=4096 00:27:01.713 [2024-07-15 10:00:18.309057] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.309092] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.309102] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.309211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.713 [2024-07-15 10:00:18.309226] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.713 [2024-07-15 10:00:18.309233] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.309240] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x724f80) on tqpair=0x6d6630 00:27:01.713 [2024-07-15 10:00:18.309251] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:01.713 [2024-07-15 10:00:18.309265] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:01.713 [2024-07-15 10:00:18.309276] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:01.713 [2024-07-15 10:00:18.309285] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:01.713 [2024-07-15 10:00:18.309293] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:01.713 [2024-07-15 10:00:18.309301] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:01.713 [2024-07-15 10:00:18.309316] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:01.713 [2024-07-15 10:00:18.309328] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.309335] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.309342] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6630) 00:27:01.713 [2024-07-15 10:00:18.309353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:01.713 [2024-07-15 10:00:18.309374] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x724f80, cid 0, qid 0 00:27:01.713 [2024-07-15 10:00:18.309538] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.713 [2024-07-15 10:00:18.309553] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.713 [2024-07-15 10:00:18.309560] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.309567] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x724f80) on tqpair=0x6d6630 00:27:01.713 [2024-07-15 10:00:18.309578] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.309585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.309592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6630) 00:27:01.713 [2024-07-15 10:00:18.309602] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.713 [2024-07-15 10:00:18.309612] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.309618] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.309625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x6d6630) 00:27:01.713 [2024-07-15 10:00:18.309633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.713 [2024-07-15 10:00:18.309643] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.309649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.309655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x6d6630) 00:27:01.713 [2024-07-15 10:00:18.309664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.713 [2024-07-15 10:00:18.309673] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.309680] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.309686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6630) 00:27:01.713 [2024-07-15 10:00:18.309694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.713 [2024-07-15 10:00:18.309703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:01.713 [2024-07-15 10:00:18.309722] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:01.713 [2024-07-15 10:00:18.309739] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.309747] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6d6630) 00:27:01.713 [2024-07-15 10:00:18.309757] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.713 [2024-07-15 10:00:18.309794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x724f80, cid 0, qid 0 00:27:01.713 [2024-07-15 10:00:18.309805] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x725100, cid 1, qid 0 00:27:01.713 [2024-07-15 10:00:18.309813] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x725280, cid 2, qid 0 00:27:01.713 [2024-07-15 10:00:18.309820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x725400, cid 3, qid 0 00:27:01.713 [2024-07-15 10:00:18.309828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x725580, cid 4, qid 0 00:27:01.713 [2024-07-15 10:00:18.310091] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.713 [2024-07-15 10:00:18.310107] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.713 [2024-07-15 10:00:18.310114] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.310120] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x725580) on tqpair=0x6d6630 00:27:01.713 [2024-07-15 10:00:18.310129] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:01.713 [2024-07-15 10:00:18.310138] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:01.713 [2024-07-15 10:00:18.310156] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.310165] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6d6630) 00:27:01.713 [2024-07-15 10:00:18.310176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.713 [2024-07-15 10:00:18.310197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x725580, cid 4, qid 0 00:27:01.713 [2024-07-15 10:00:18.313888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.713 [2024-07-15 10:00:18.313904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.713 [2024-07-15 10:00:18.313911] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.313917] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6d6630): datao=0, datal=4096, cccid=4 00:27:01.713 [2024-07-15 10:00:18.313925] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x725580) on tqpair(0x6d6630): expected_datao=0, payload_size=4096 00:27:01.713 [2024-07-15 10:00:18.313932] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.313941] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.313948] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.313957] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.713 [2024-07-15 10:00:18.313965] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.713 [2024-07-15 10:00:18.313971] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.313978] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x725580) on tqpair=0x6d6630 00:27:01.713 [2024-07-15 10:00:18.313997] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:01.713 [2024-07-15 10:00:18.314050] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.713 [2024-07-15 10:00:18.314060] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6d6630) 00:27:01.713 [2024-07-15 10:00:18.314071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.713 [2024-07-15 10:00:18.314083] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.714 [2024-07-15 10:00:18.314094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.714 [2024-07-15 10:00:18.314101] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6d6630) 00:27:01.714 [2024-07-15 10:00:18.314110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:27:01.714 [2024-07-15 10:00:18.314137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x725580, cid 4, qid 0 00:27:01.714 [2024-07-15 10:00:18.314149] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x725700, cid 5, qid 0 00:27:01.714 [2024-07-15 10:00:18.314367] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.714 [2024-07-15 10:00:18.314380] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.714 [2024-07-15 10:00:18.314387] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.714 [2024-07-15 10:00:18.314393] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6d6630): datao=0, datal=1024, cccid=4 00:27:01.714 [2024-07-15 10:00:18.314401] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x725580) on tqpair(0x6d6630): expected_datao=0, payload_size=1024 00:27:01.714 [2024-07-15 10:00:18.314408] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.714 [2024-07-15 10:00:18.314418] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.714 [2024-07-15 10:00:18.314425] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.714 [2024-07-15 10:00:18.314434] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.714 [2024-07-15 10:00:18.314443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.714 [2024-07-15 10:00:18.314449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.714 [2024-07-15 10:00:18.314456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x725700) on tqpair=0x6d6630 00:27:01.714 [2024-07-15 10:00:18.355051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.714 [2024-07-15 10:00:18.355069] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.714 [2024-07-15 10:00:18.355076] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.714 [2024-07-15 10:00:18.355083] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x725580) on tqpair=0x6d6630 00:27:01.714 [2024-07-15 10:00:18.355101] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.714 [2024-07-15 10:00:18.355110] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6d6630) 00:27:01.714 [2024-07-15 10:00:18.355121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.714 [2024-07-15 10:00:18.355151] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x725580, cid 4, qid 0 00:27:01.714 [2024-07-15 10:00:18.355290] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.714 [2024-07-15 10:00:18.355306] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.714 [2024-07-15 10:00:18.355313] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.714 [2024-07-15 10:00:18.355319] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6d6630): datao=0, datal=3072, cccid=4 00:27:01.714 [2024-07-15 10:00:18.355327] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x725580) on tqpair(0x6d6630): expected_datao=0, payload_size=3072 00:27:01.714 [2024-07-15 10:00:18.355334] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.714 [2024-07-15 10:00:18.355352] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:01.714 [2024-07-15 10:00:18.355361] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:01.714 [2024-07-15 10:00:18.355418] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:01.714 [2024-07-15 10:00:18.355429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:01.714 [2024-07-15 10:00:18.355436] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:01.714 [2024-07-15 10:00:18.355443] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x725580) on tqpair=0x6d6630
00:27:01.714 [2024-07-15 10:00:18.355464] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:01.714 [2024-07-15 10:00:18.355474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6d6630)
00:27:01.714 [2024-07-15 10:00:18.355485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.714 [2024-07-15 10:00:18.355513] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x725580, cid 4, qid 0
00:27:01.714 [2024-07-15 10:00:18.355646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:01.714 [2024-07-15 10:00:18.355661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:01.714 [2024-07-15 10:00:18.355668] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:01.714 [2024-07-15 10:00:18.355675] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6d6630): datao=0, datal=8, cccid=4
00:27:01.714 [2024-07-15 10:00:18.355682] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x725580) on tqpair(0x6d6630): expected_datao=0, payload_size=8
00:27:01.714 [2024-07-15 10:00:18.355690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:01.714 [2024-07-15 10:00:18.355699] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:01.714 [2024-07-15 10:00:18.355706] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:01.714 [2024-07-15 10:00:18.396060] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:01.714 [2024-07-15 10:00:18.396079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:01.714 [2024-07-15 10:00:18.396086] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:01.714 [2024-07-15 10:00:18.396093] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x725580) on tqpair=0x6d6630
00:27:01.714 =====================================================
00:27:01.714 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:27:01.714 =====================================================
00:27:01.714 Controller Capabilities/Features
00:27:01.714 ================================
00:27:01.714 Vendor ID: 0000
00:27:01.714 Subsystem Vendor ID: 0000
00:27:01.714 Serial Number: ....................
00:27:01.714 Model Number: ........................................
00:27:01.714 Firmware Version: 24.09
00:27:01.714 Recommended Arb Burst: 0
00:27:01.714 IEEE OUI Identifier: 00 00 00
00:27:01.714 Multi-path I/O
00:27:01.714 May have multiple subsystem ports: No
00:27:01.714 May have multiple controllers: No
00:27:01.714 Associated with SR-IOV VF: No
00:27:01.714 Max Data Transfer Size: 131072
00:27:01.714 Max Number of Namespaces: 0
00:27:01.714 Max Number of I/O Queues: 1024
00:27:01.714 NVMe Specification Version (VS): 1.3
00:27:01.714 NVMe Specification Version (Identify): 1.3
00:27:01.714 Maximum Queue Entries: 128
00:27:01.714 Contiguous Queues Required: Yes
00:27:01.714 Arbitration Mechanisms Supported
00:27:01.714 Weighted Round Robin: Not Supported
00:27:01.714 Vendor Specific: Not Supported
00:27:01.714 Reset Timeout: 15000 ms
00:27:01.714 Doorbell Stride: 4 bytes
00:27:01.714 NVM Subsystem Reset: Not Supported
00:27:01.714 Command Sets Supported
00:27:01.714 NVM Command Set: Supported
00:27:01.714 Boot Partition: Not Supported
00:27:01.714 Memory Page Size Minimum: 4096 bytes
00:27:01.714 Memory Page Size Maximum: 4096 bytes
00:27:01.714 Persistent Memory Region: Not Supported
00:27:01.714 Optional Asynchronous Events Supported
00:27:01.714 Namespace Attribute Notices: Not Supported
00:27:01.714 Firmware Activation Notices: Not Supported
00:27:01.714 ANA Change Notices: Not Supported
00:27:01.714 PLE Aggregate Log Change Notices: Not Supported
00:27:01.714 LBA Status Info Alert Notices: Not Supported
00:27:01.714 EGE Aggregate Log Change Notices: Not Supported
00:27:01.714 Normal NVM Subsystem Shutdown event: Not Supported
00:27:01.714 Zone Descriptor Change Notices: Not Supported
00:27:01.714 Discovery Log Change Notices: Supported
00:27:01.714 Controller Attributes
00:27:01.714 128-bit Host Identifier: Not Supported
00:27:01.714 Non-Operational Permissive Mode: Not Supported
00:27:01.714 NVM Sets: Not Supported
00:27:01.714 Read Recovery Levels: Not Supported
00:27:01.714 Endurance Groups: Not Supported
00:27:01.714 Predictable Latency Mode: Not Supported
00:27:01.714 Traffic Based Keep Alive: Not Supported
00:27:01.714 Namespace Granularity: Not Supported
00:27:01.714 SQ Associations: Not Supported
00:27:01.714 UUID List: Not Supported
00:27:01.714 Multi-Domain Subsystem: Not Supported
00:27:01.714 Fixed Capacity Management: Not Supported
00:27:01.714 Variable Capacity Management: Not Supported
00:27:01.714 Delete Endurance Group: Not Supported
00:27:01.714 Delete NVM Set: Not Supported
00:27:01.714 Extended LBA Formats Supported: Not Supported
00:27:01.714 Flexible Data Placement Supported: Not Supported
00:27:01.714
00:27:01.714 Controller Memory Buffer Support
00:27:01.714 ================================
00:27:01.714 Supported: No
00:27:01.714
00:27:01.714 Persistent Memory Region Support
00:27:01.714 ================================
00:27:01.714 Supported: No
00:27:01.714
00:27:01.714 Admin Command Set Attributes
00:27:01.714 ============================
00:27:01.714 Security Send/Receive: Not Supported
00:27:01.714 Format NVM: Not Supported
00:27:01.714 Firmware Activate/Download: Not Supported
00:27:01.714 Namespace Management: Not Supported
00:27:01.714 Device Self-Test: Not Supported
00:27:01.714 Directives: Not Supported
00:27:01.714 NVMe-MI: Not Supported
00:27:01.714 Virtualization Management: Not Supported
00:27:01.714 Doorbell Buffer Config: Not Supported
00:27:01.714 Get LBA Status Capability: Not Supported
00:27:01.714 Command & Feature Lockdown Capability: Not Supported
00:27:01.714 Abort Command Limit: 1
00:27:01.714 Async Event Request Limit: 4
00:27:01.714 Number of Firmware Slots: N/A
00:27:01.714 Firmware Slot 1 Read-Only: N/A
00:27:01.714 Firmware Activation Without Reset: N/A
00:27:01.714 Multiple Update Detection Support: N/A
00:27:01.714 Firmware Update Granularity: No Information Provided
00:27:01.714 Per-Namespace SMART Log: No
00:27:01.714 Asymmetric Namespace Access Log Page: Not Supported
00:27:01.714 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:27:01.714 Command Effects Log Page: Not Supported
00:27:01.714 Get Log Page Extended Data: Supported
00:27:01.714 Telemetry Log Pages: Not Supported
00:27:01.714 Persistent Event Log Pages: Not Supported
00:27:01.714 Supported Log Pages Log Page: May Support
00:27:01.714 Commands Supported & Effects Log Page: Not Supported
00:27:01.714 Feature Identifiers & Effects Log Page: May Support
00:27:01.714 NVMe-MI Commands & Effects Log Page: May Support
00:27:01.714 Data Area 4 for Telemetry Log: Not Supported
00:27:01.714 Error Log Page Entries Supported: 128
00:27:01.714 Keep Alive: Not Supported
00:27:01.714
00:27:01.714 NVM Command Set Attributes
00:27:01.714 ==========================
00:27:01.714 Submission Queue Entry Size
00:27:01.714 Max: 1
00:27:01.714 Min: 1
00:27:01.714 Completion Queue Entry Size
00:27:01.714 Max: 1
00:27:01.714 Min: 1
00:27:01.714 Number of Namespaces: 0
00:27:01.715 Compare Command: Not Supported
00:27:01.715 Write Uncorrectable Command: Not Supported
00:27:01.715 Dataset Management Command: Not Supported
00:27:01.715 Write Zeroes Command: Not Supported
00:27:01.715 Set Features Save Field: Not Supported
00:27:01.715 Reservations: Not Supported
00:27:01.715 Timestamp: Not Supported
00:27:01.715 Copy: Not Supported
00:27:01.715 Volatile Write Cache: Not Present
00:27:01.715 Atomic Write Unit (Normal): 1
00:27:01.715 Atomic Write Unit (PFail): 1
00:27:01.715 Atomic Compare & Write Unit: 1
00:27:01.715 Fused Compare & Write: Supported
00:27:01.715 Scatter-Gather List
00:27:01.715 SGL Command Set: Supported
00:27:01.715 SGL Keyed: Supported
00:27:01.715 SGL Bit Bucket Descriptor: Not Supported
00:27:01.715 SGL Metadata Pointer: Not Supported
00:27:01.715 Oversized SGL: Not Supported
00:27:01.715 SGL Metadata Address: Not Supported
00:27:01.715 SGL Offset: Supported
00:27:01.715 Transport SGL Data Block: Not Supported
00:27:01.715 Replay Protected Memory Block: Not Supported
00:27:01.715
00:27:01.715 Firmware Slot Information
00:27:01.715 =========================
00:27:01.715 Active slot: 0
00:27:01.715
00:27:01.715
00:27:01.715 Error Log
00:27:01.715 =========
00:27:01.715
00:27:01.715 Active Namespaces
00:27:01.715 =================
00:27:01.715 Discovery Log Page
00:27:01.715 ==================
00:27:01.715 Generation Counter: 2
00:27:01.715 Number of Records: 2
00:27:01.715 Record Format: 0
00:27:01.715
00:27:01.715 Discovery Log Entry 0
00:27:01.715 ----------------------
00:27:01.715 Transport Type: 3 (TCP)
00:27:01.715 Address Family: 1 (IPv4)
00:27:01.715 Subsystem Type: 3 (Current Discovery Subsystem)
00:27:01.715 Entry Flags:
00:27:01.715 Duplicate Returned Information: 1
00:27:01.715 Explicit Persistent Connection Support for Discovery: 1
00:27:01.715 Transport Requirements:
00:27:01.715 Secure Channel: Not Required
00:27:01.715 Port ID: 0 (0x0000)
00:27:01.715 Controller ID: 65535 (0xffff)
00:27:01.715 Admin Max SQ Size: 128
00:27:01.715 Transport Service Identifier: 4420
00:27:01.715 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:27:01.715 Transport Address: 10.0.0.2
00:27:01.715 Discovery Log Entry 1
00:27:01.715 ----------------------
00:27:01.715 Transport Type: 3 (TCP)
00:27:01.715 Address Family: 1 (IPv4)
00:27:01.715 Subsystem Type: 2 (NVM Subsystem)
00:27:01.715 Entry Flags:
00:27:01.715 Duplicate Returned Information: 0
00:27:01.715 Explicit Persistent Connection Support for Discovery: 0
00:27:01.715 Transport Requirements:
00:27:01.715 Secure Channel: Not Required
00:27:01.715 Port ID: 0 (0x0000)
00:27:01.715 Controller ID: 65535 (0xffff)
00:27:01.715 Admin Max SQ Size: 128
00:27:01.715 Transport Service Identifier: 4420
00:27:01.715 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:27:01.715 Transport Address: 10.0.0.2
[2024-07-15 10:00:18.396213] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:27:01.715 [2024-07-15 10:00:18.396235] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x724f80) on tqpair=0x6d6630
00:27:01.715 [2024-07-15 10:00:18.396247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:01.715 [2024-07-15 10:00:18.396256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x725100) on tqpair=0x6d6630
00:27:01.715 [2024-07-15 10:00:18.396263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:01.715 [2024-07-15 10:00:18.396272] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x725280) on tqpair=0x6d6630
00:27:01.715 [2024-07-15 10:00:18.396279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:01.715 [2024-07-15 10:00:18.396287] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x725400) on tqpair=0x6d6630
00:27:01.715 [2024-07-15 10:00:18.396295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:01.715 [2024-07-15 10:00:18.396312] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:01.715 [2024-07-15 10:00:18.396321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:01.715 [2024-07-15 10:00:18.396328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6630)
00:27:01.715 [2024-07-15 10:00:18.396353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.715 [2024-07-15 10:00:18.396378] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x725400, cid 3, qid 0
00:27:01.715 [2024-07-15 10:00:18.396545] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:01.715 [2024-07-15 10:00:18.396561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:01.715 [2024-07-15 10:00:18.396568] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:01.715 [2024-07-15 10:00:18.396575] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x725400) on tqpair=0x6d6630
00:27:01.715 [2024-07-15 10:00:18.396590] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:01.715 [2024-07-15 10:00:18.396599] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:01.715 [2024-07-15 10:00:18.396605] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6630)
00:27:01.715 [2024-07-15 10:00:18.396616]
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.715 [2024-07-15 10:00:18.396643] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x725400, cid 3, qid 0 00:27:01.715 [2024-07-15 10:00:18.396778] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.715 [2024-07-15 10:00:18.396791] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.715 [2024-07-15 10:00:18.396797] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.715 [2024-07-15 10:00:18.396804] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x725400) on tqpair=0x6d6630 00:27:01.715 [2024-07-15 10:00:18.396812] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:01.715 [2024-07-15 10:00:18.396820] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:01.715 [2024-07-15 10:00:18.396835] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.715 [2024-07-15 10:00:18.396845] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.715 [2024-07-15 10:00:18.396851] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6630) 00:27:01.715 [2024-07-15 10:00:18.396861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.715 [2024-07-15 10:00:18.396888] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x725400, cid 3, qid 0 00:27:01.715 [2024-07-15 10:00:18.397009] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.715 [2024-07-15 10:00:18.397024] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.715 [2024-07-15 10:00:18.397031] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.715 [2024-07-15 10:00:18.397038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x725400) on tqpair=0x6d6630 00:27:01.715 [2024-07-15 10:00:18.397055] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.715 [2024-07-15 10:00:18.397065] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.715 [2024-07-15 10:00:18.397071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6630) 00:27:01.715 [2024-07-15 10:00:18.397082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.715 [2024-07-15 10:00:18.397102] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x725400, cid 3, qid 0 00:27:01.715 [2024-07-15 10:00:18.397268] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.715 [2024-07-15 10:00:18.397283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.715 [2024-07-15 10:00:18.397290] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.715 [2024-07-15 10:00:18.397297] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x725400) on tqpair=0x6d6630 00:27:01.715 [2024-07-15 10:00:18.397313] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.715 [2024-07-15 10:00:18.397323] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.715 [2024-07-15 10:00:18.397329] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6630) 00:27:01.715 [2024-07-15 10:00:18.397340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.715 [2024-07-15 10:00:18.397360] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x725400, cid 3, qid 0 00:27:01.715 [2024-07-15 10:00:18.397521] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.715 [2024-07-15 10:00:18.397534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.715 [2024-07-15 10:00:18.397541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.715 [2024-07-15 10:00:18.397551] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x725400) on tqpair=0x6d6630 00:27:01.715 [2024-07-15 10:00:18.397568] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.715 [2024-07-15 10:00:18.397578] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.715 [2024-07-15 10:00:18.397584] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6630) 00:27:01.715 [2024-07-15 10:00:18.397594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.715 [2024-07-15 10:00:18.397615] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x725400, cid 3, qid 0 00:27:01.715 [2024-07-15 10:00:18.397725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.715 [2024-07-15 10:00:18.397737] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.715 [2024-07-15 10:00:18.397744] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.715 [2024-07-15 10:00:18.397751] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x725400) on tqpair=0x6d6630 00:27:01.715 [2024-07-15 10:00:18.397766] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.715 [2024-07-15 10:00:18.397776] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.715 [2024-07-15 10:00:18.397782] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6630) 00:27:01.715 [2024-07-15 10:00:18.397792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.715 [2024-07-15 10:00:18.397813] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x725400, cid 3, qid 0 00:27:01.716 [2024-07-15 10:00:18.401901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.716 [2024-07-15 10:00:18.401918] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.716 [2024-07-15 10:00:18.401925] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.401931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x725400) on tqpair=0x6d6630 00:27:01.716 [2024-07-15 10:00:18.401948] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.401974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.401980] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6630) 00:27:01.716 [2024-07-15 10:00:18.401991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.716 [2024-07-15 10:00:18.402014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x725400, cid 3, qid 0 00:27:01.716 [2024-07-15 10:00:18.402179] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.716 [2024-07-15 10:00:18.402195] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.716 [2024-07-15 10:00:18.402202] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.402209] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x725400) on tqpair=0x6d6630 00:27:01.716 [2024-07-15 10:00:18.402222] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:27:01.716 00:27:01.716 10:00:18 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:01.716 [2024-07-15 10:00:18.436726] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:01.716 [2024-07-15 10:00:18.436770] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1994369 ] 00:27:01.716 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.716 [2024-07-15 10:00:18.455171] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:01.716 [2024-07-15 10:00:18.472656] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:01.716 [2024-07-15 10:00:18.472701] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:01.716 [2024-07-15 10:00:18.472710] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:01.716 [2024-07-15 10:00:18.472723] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:01.716 [2024-07-15 10:00:18.472732] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:01.716 [2024-07-15 10:00:18.472941] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:01.716 [2024-07-15 10:00:18.472980] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x121c630 0 00:27:01.716 [2024-07-15 10:00:18.479902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:01.716 [2024-07-15 10:00:18.479921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:01.716 [2024-07-15 10:00:18.479928] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:01.716 [2024-07-15 10:00:18.479934] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:01.716 [2024-07-15 10:00:18.479971] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.479982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.479989] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121c630) 00:27:01.716 [2024-07-15 10:00:18.480002] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:01.716 [2024-07-15 10:00:18.480028] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126af80, cid 0, qid 0 00:27:01.716 [2024-07-15 10:00:18.487898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.716 [2024-07-15 10:00:18.487931] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.716 [2024-07-15 10:00:18.487939] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.487946] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126af80) on tqpair=0x121c630 00:27:01.716 [2024-07-15 10:00:18.487965] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:01.716 [2024-07-15 10:00:18.487976] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:01.716 [2024-07-15 10:00:18.487985] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:01.716 [2024-07-15 10:00:18.488003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.488012] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.488019] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121c630) 00:27:01.716 [2024-07-15 10:00:18.488030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.716 [2024-07-15 10:00:18.488055] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126af80, cid 0, qid 0 00:27:01.716 [2024-07-15 10:00:18.488215] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.716 [2024-07-15 10:00:18.488228] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.716 [2024-07-15 10:00:18.488235] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.488242] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126af80) on tqpair=0x121c630 00:27:01.716 [2024-07-15 10:00:18.488250] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:01.716 [2024-07-15 10:00:18.488267] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:01.716 [2024-07-15 10:00:18.488280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.488287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.488294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121c630) 00:27:01.716 [2024-07-15 10:00:18.488304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.716 [2024-07-15 10:00:18.488325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126af80, cid 0, qid 0 00:27:01.716 [2024-07-15 10:00:18.488453] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.716 [2024-07-15 10:00:18.488465] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.716 [2024-07-15 10:00:18.488472] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.488479] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126af80) on tqpair=0x121c630 00:27:01.716 [2024-07-15 10:00:18.488487] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:01.716 [2024-07-15 10:00:18.488500] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:01.716 [2024-07-15 10:00:18.488512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.488520] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.488526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121c630) 00:27:01.716 [2024-07-15 10:00:18.488536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.716 [2024-07-15 10:00:18.488557] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126af80, cid 0, qid 0 00:27:01.716 [2024-07-15 10:00:18.488684] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.716 [2024-07-15 10:00:18.488696] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.716 [2024-07-15 10:00:18.488703] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.488711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126af80) on tqpair=0x121c630 00:27:01.716 [2024-07-15 10:00:18.488720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:01.716 [2024-07-15 10:00:18.488736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.488745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.488751] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121c630) 00:27:01.716 [2024-07-15 10:00:18.488762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.716 [2024-07-15 10:00:18.488782] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126af80, cid 0, qid 0 00:27:01.716 [2024-07-15 10:00:18.488917] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.716 [2024-07-15 10:00:18.488932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.716 [2024-07-15 10:00:18.488940] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.488947] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126af80) on tqpair=0x121c630 00:27:01.716 [2024-07-15 10:00:18.488954] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:01.716 [2024-07-15 10:00:18.488962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:01.716 [2024-07-15 10:00:18.488976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:01.716 [2024-07-15 10:00:18.489089] 
nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:01.716 [2024-07-15 10:00:18.489097] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:01.716 [2024-07-15 10:00:18.489109] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.489117] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.489123] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121c630) 00:27:01.716 [2024-07-15 10:00:18.489133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.716 [2024-07-15 10:00:18.489155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126af80, cid 0, qid 0 00:27:01.716 [2024-07-15 10:00:18.489306] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.716 [2024-07-15 10:00:18.489318] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.716 [2024-07-15 10:00:18.489325] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.716 [2024-07-15 10:00:18.489332] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126af80) on tqpair=0x121c630 00:27:01.717 [2024-07-15 10:00:18.489340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:01.717 [2024-07-15 10:00:18.489356] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.717 [2024-07-15 10:00:18.489365] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.717 [2024-07-15 10:00:18.489371] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121c630) 00:27:01.717 [2024-07-15 10:00:18.489381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.717 [2024-07-15 10:00:18.489401] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126af80, cid 0, qid 0 00:27:01.717 [2024-07-15 10:00:18.489532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.717 [2024-07-15 10:00:18.489546] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.717 [2024-07-15 10:00:18.489553] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.717 [2024-07-15 10:00:18.489559] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126af80) on tqpair=0x121c630 00:27:01.717 [2024-07-15 10:00:18.489567] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:01.717 [2024-07-15 10:00:18.489575] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:01.717 [2024-07-15 10:00:18.489588] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:01.717 [2024-07-15 10:00:18.489602] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:01.717 [2024-07-15 10:00:18.489616] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.717 
[2024-07-15 10:00:18.489624] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121c630) 00:27:01.717 [2024-07-15 10:00:18.489634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.717 [2024-07-15 10:00:18.489655] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126af80, cid 0, qid 0 00:27:01.717 [2024-07-15 10:00:18.489831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.717 [2024-07-15 10:00:18.489846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.717 [2024-07-15 10:00:18.489853] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.717 [2024-07-15 10:00:18.489863] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121c630): datao=0, datal=4096, cccid=0 00:27:01.717 [2024-07-15 10:00:18.489872] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x126af80) on tqpair(0x121c630): expected_datao=0, payload_size=4096 00:27:01.717 [2024-07-15 10:00:18.489887] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.717 [2024-07-15 10:00:18.489905] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.717 [2024-07-15 10:00:18.489914] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.534889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.980 [2024-07-15 10:00:18.534908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.980 [2024-07-15 10:00:18.534915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.534937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126af80) on tqpair=0x121c630 00:27:01.980 [2024-07-15 10:00:18.534949] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:01.980 [2024-07-15 10:00:18.534962] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:01.980 [2024-07-15 10:00:18.534971] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:01.980 [2024-07-15 10:00:18.534977] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:01.980 [2024-07-15 10:00:18.534985] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:01.980 [2024-07-15 10:00:18.534993] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.535008] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.535021] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.535028] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.535035] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121c630) 00:27:01.980 [2024-07-15 10:00:18.535047] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:01.980 [2024-07-15 
10:00:18.535070] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126af80, cid 0, qid 0 00:27:01.980 [2024-07-15 10:00:18.535201] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.980 [2024-07-15 10:00:18.535214] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.980 [2024-07-15 10:00:18.535221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.535228] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126af80) on tqpair=0x121c630 00:27:01.980 [2024-07-15 10:00:18.535238] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.535245] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.535252] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121c630) 00:27:01.980 [2024-07-15 10:00:18.535262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.980 [2024-07-15 10:00:18.535272] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.535279] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.535285] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x121c630) 00:27:01.980 [2024-07-15 10:00:18.535294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.980 [2024-07-15 10:00:18.535303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.535310] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.535321] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x121c630) 00:27:01.980 [2024-07-15 10:00:18.535330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.980 [2024-07-15 10:00:18.535340] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.535347] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.535368] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121c630) 00:27:01.980 [2024-07-15 10:00:18.535377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.980 [2024-07-15 10:00:18.535386] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.535403] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.535416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.535438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121c630) 00:27:01.980 [2024-07-15 10:00:18.535448] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.980 [2024-07-15 10:00:18.535469] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126af80, cid 0, qid 0 00:27:01.980 [2024-07-15 10:00:18.535480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b100, cid 1, qid 0 00:27:01.980 [2024-07-15 10:00:18.535502] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b280, cid 2, qid 0 00:27:01.980 [2024-07-15 10:00:18.535510] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b400, cid 3, qid 0 00:27:01.980 [2024-07-15 10:00:18.535518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b580, cid 4, qid 0 00:27:01.980 [2024-07-15 10:00:18.535702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.980 [2024-07-15 10:00:18.535715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.980 [2024-07-15 10:00:18.535722] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.535729] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b580) on tqpair=0x121c630 00:27:01.980 [2024-07-15 10:00:18.535737] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:01.980 [2024-07-15 10:00:18.535746] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.535759] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.535770] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.535796] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.535803] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.535810] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121c630) 00:27:01.980 [2024-07-15 10:00:18.535820] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:01.980 [2024-07-15 10:00:18.535840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b580, cid 4, qid 0 00:27:01.980 [2024-07-15 10:00:18.536009] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.980 [2024-07-15 10:00:18.536026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.980 [2024-07-15 10:00:18.536033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.536043] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b580) on tqpair=0x121c630 00:27:01.980 [2024-07-15 10:00:18.536110] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.536128] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.536159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.536167] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121c630) 00:27:01.980 
[2024-07-15 10:00:18.536177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.980 [2024-07-15 10:00:18.536199] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b580, cid 4, qid 0 00:27:01.980 [2024-07-15 10:00:18.536384] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.980 [2024-07-15 10:00:18.536400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.980 [2024-07-15 10:00:18.536407] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.536414] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121c630): datao=0, datal=4096, cccid=4 00:27:01.980 [2024-07-15 10:00:18.536422] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x126b580) on tqpair(0x121c630): expected_datao=0, payload_size=4096 00:27:01.980 [2024-07-15 10:00:18.536429] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.536440] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.536447] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.536459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.980 [2024-07-15 10:00:18.536469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.980 [2024-07-15 10:00:18.536476] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.536482] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b580) on tqpair=0x121c630 00:27:01.980 [2024-07-15 10:00:18.536517] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:01.980 [2024-07-15 10:00:18.536534] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.536551] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.536564] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.536571] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121c630) 00:27:01.980 [2024-07-15 10:00:18.536582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.980 [2024-07-15 10:00:18.536602] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b580, cid 4, qid 0 00:27:01.980 [2024-07-15 10:00:18.536772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.980 [2024-07-15 10:00:18.536788] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.980 [2024-07-15 10:00:18.536795] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.536801] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121c630): datao=0, datal=4096, cccid=4 00:27:01.980 [2024-07-15 10:00:18.536809] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x126b580) on tqpair(0x121c630): expected_datao=0, payload_size=4096 00:27:01.980 [2024-07-15 10:00:18.536816] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.536826] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.536834] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.536849] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.980 [2024-07-15 10:00:18.536859] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.980 [2024-07-15 10:00:18.536866] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.536873] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b580) on tqpair=0x121c630 00:27:01.980 [2024-07-15 10:00:18.536901] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.536920] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.536934] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.536941] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121c630) 00:27:01.980 [2024-07-15 10:00:18.536952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.980 [2024-07-15 10:00:18.536973] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b580, cid 4, qid 0 00:27:01.980 [2024-07-15 10:00:18.537152] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.980 [2024-07-15 10:00:18.537167] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.980 [2024-07-15 10:00:18.537174] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.537181] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121c630): datao=0, datal=4096, cccid=4 00:27:01.980 [2024-07-15 10:00:18.537189] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x126b580) on tqpair(0x121c630): expected_datao=0, payload_size=4096 00:27:01.980 [2024-07-15 10:00:18.537196] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.537206] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.537214] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.537225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.980 [2024-07-15 10:00:18.537235] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.980 [2024-07-15 10:00:18.537241] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.537248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b580) on tqpair=0x121c630 00:27:01.980 [2024-07-15 10:00:18.537260] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.537275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.537292] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.537303] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.537311] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.537320] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.537328] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:01.980 [2024-07-15 10:00:18.537336] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:01.980 [2024-07-15 10:00:18.537345] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:01.980 [2024-07-15 10:00:18.537367] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.537377] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121c630) 00:27:01.980 [2024-07-15 10:00:18.537387] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.980 [2024-07-15 10:00:18.537399] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.537406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.537412] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x121c630) 00:27:01.980 [2024-07-15 10:00:18.537421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.980 [2024-07-15 10:00:18.537460] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b580, cid 4, qid 0 00:27:01.980 [2024-07-15 10:00:18.537471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b700, cid 5, qid 0 00:27:01.980 [2024-07-15 10:00:18.537665] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.980 [2024-07-15 10:00:18.537678] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.980 [2024-07-15 10:00:18.537685] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.980 [2024-07-15 10:00:18.537692] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b580) on tqpair=0x121c630 00:27:01.980 [2024-07-15 10:00:18.537703] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.981 [2024-07-15 10:00:18.537712] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.981 [2024-07-15 10:00:18.537719] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.537725] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b700) on tqpair=0x121c630 00:27:01.981 [2024-07-15 10:00:18.537741] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.537749] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on 
tqpair(0x121c630) 00:27:01.981 [2024-07-15 10:00:18.537760] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.981 [2024-07-15 10:00:18.537780] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b700, cid 5, qid 0 00:27:01.981 [2024-07-15 10:00:18.537944] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.981 [2024-07-15 10:00:18.537959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.981 [2024-07-15 10:00:18.537966] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.537972] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b700) on tqpair=0x121c630 00:27:01.981 [2024-07-15 10:00:18.537988] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.537997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x121c630) 00:27:01.981 [2024-07-15 10:00:18.538007] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.981 [2024-07-15 10:00:18.538028] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b700, cid 5, qid 0 00:27:01.981 [2024-07-15 10:00:18.538149] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.981 [2024-07-15 10:00:18.538164] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.981 [2024-07-15 10:00:18.538171] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.538177] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b700) on tqpair=0x121c630 00:27:01.981 [2024-07-15 10:00:18.538194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.538203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x121c630) 00:27:01.981 [2024-07-15 10:00:18.538213] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.981 [2024-07-15 10:00:18.538237] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b700, cid 5, qid 0 00:27:01.981 [2024-07-15 10:00:18.538347] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.981 [2024-07-15 10:00:18.538360] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.981 [2024-07-15 10:00:18.538367] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.538373] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b700) on tqpair=0x121c630 00:27:01.981 [2024-07-15 10:00:18.538396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.538407] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x121c630) 00:27:01.981 [2024-07-15 10:00:18.538417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.981 [2024-07-15 10:00:18.538430] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.538437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x121c630) 00:27:01.981 [2024-07-15 10:00:18.538446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.981 [2024-07-15 10:00:18.538458] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.538465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x121c630) 00:27:01.981 [2024-07-15 10:00:18.538474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.981 [2024-07-15 10:00:18.538485] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.538493] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x121c630) 00:27:01.981 [2024-07-15 10:00:18.538502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.981 [2024-07-15 10:00:18.538537] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b700, cid 5, qid 0 00:27:01.981 [2024-07-15 10:00:18.538547] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b580, cid 4, qid 0 00:27:01.981 [2024-07-15 10:00:18.538555] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b880, cid 6, qid 0 00:27:01.981 [2024-07-15 10:00:18.538562] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126ba00, cid 7, qid 0 00:27:01.981 [2024-07-15 10:00:18.538855] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.981 [2024-07-15 10:00:18.538873] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.981 [2024-07-15 10:00:18.542908] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.542916] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121c630): datao=0, datal=8192, cccid=5 00:27:01.981 [2024-07-15 10:00:18.542924] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x126b700) on tqpair(0x121c630): expected_datao=0, payload_size=8192 00:27:01.981 [2024-07-15 10:00:18.542931] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.542951] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.542960] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.542973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.981 [2024-07-15 10:00:18.542983] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.981 [2024-07-15 10:00:18.542989] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.542995] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121c630): datao=0, datal=512, cccid=4 00:27:01.981 [2024-07-15 10:00:18.543002] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x126b580) on tqpair(0x121c630): expected_datao=0, payload_size=512 00:27:01.981 [2024-07-15 10:00:18.543013] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.543023] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.543030] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.543038] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.981 [2024-07-15 10:00:18.543046] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.981 [2024-07-15 10:00:18.543053] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.543059] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121c630): datao=0, datal=512, cccid=6 00:27:01.981 [2024-07-15 10:00:18.543066] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x126b880) on tqpair(0x121c630): expected_datao=0, payload_size=512 00:27:01.981 [2024-07-15 10:00:18.543073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.543082] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.543088] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.543096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.981 [2024-07-15 10:00:18.543105] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.981 [2024-07-15 10:00:18.543111] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.543117] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121c630): datao=0, datal=4096, cccid=7 00:27:01.981 [2024-07-15 10:00:18.543124] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x126ba00) on tqpair(0x121c630): expected_datao=0, payload_size=4096 00:27:01.981 [2024-07-15 10:00:18.543131] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.543140] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.543147] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.543155] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.981 [2024-07-15 10:00:18.543164] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.981 [2024-07-15 10:00:18.543170] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.543177] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b700) on tqpair=0x121c630 00:27:01.981 [2024-07-15 10:00:18.543210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.981 [2024-07-15 10:00:18.543220] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.981 [2024-07-15 10:00:18.543227] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.543233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b580) on tqpair=0x121c630 00:27:01.981 [2024-07-15 10:00:18.543247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.981 [2024-07-15 10:00:18.543256] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.981 [2024-07-15 10:00:18.543262] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.543268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b880) on tqpair=0x121c630 00:27:01.981 [2024-07-15 10:00:18.543278] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.981 [2024-07-15 10:00:18.543287] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.981 [2024-07-15 10:00:18.543293] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.981 [2024-07-15 10:00:18.543299] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126ba00) on tqpair=0x121c630 00:27:01.981 ===================================================== 00:27:01.981 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:01.981 ===================================================== 00:27:01.981 Controller Capabilities/Features 00:27:01.981 ================================ 00:27:01.981 Vendor ID: 8086 00:27:01.981 Subsystem Vendor ID: 8086 00:27:01.981 Serial Number: SPDK00000000000001 00:27:01.981 Model Number: SPDK bdev Controller 00:27:01.981 Firmware Version: 24.09 00:27:01.981 Recommended Arb Burst: 6 00:27:01.981 IEEE OUI Identifier: e4 d2 5c 00:27:01.981 Multi-path I/O 00:27:01.981 May have multiple subsystem ports: Yes 00:27:01.981 May have multiple controllers: Yes 00:27:01.981 Associated with SR-IOV VF: No 00:27:01.981 Max Data Transfer Size: 131072 00:27:01.981 Max Number of Namespaces: 32 00:27:01.981 Max Number of I/O Queues: 127 00:27:01.981 NVMe Specification Version (VS): 1.3 00:27:01.981 NVMe Specification Version (Identify): 1.3 00:27:01.981 Maximum Queue Entries: 128 00:27:01.981 Contiguous Queues Required: Yes 00:27:01.981 Arbitration Mechanisms Supported 00:27:01.981 Weighted Round Robin: Not Supported 00:27:01.981 Vendor Specific: Not Supported 00:27:01.981 Reset Timeout: 15000 ms 00:27:01.981 Doorbell Stride: 4 bytes 00:27:01.981 NVM Subsystem Reset: Not Supported 00:27:01.981 Command Sets Supported 00:27:01.981 NVM Command Set: Supported 00:27:01.981 Boot Partition: Not Supported 00:27:01.981 Memory Page Size Minimum: 4096 bytes 00:27:01.981 Memory Page Size Maximum: 4096 bytes 00:27:01.981 Persistent Memory Region: Not Supported 00:27:01.981 Optional Asynchronous Events Supported 00:27:01.981 Namespace Attribute Notices: Supported 00:27:01.981 Firmware Activation Notices: Not Supported 00:27:01.981 ANA Change Notices: Not Supported 00:27:01.981 PLE Aggregate Log Change Notices: Not Supported 00:27:01.981 LBA Status Info Alert Notices: Not Supported 00:27:01.981 EGE Aggregate Log Change Notices: Not Supported 00:27:01.981 Normal NVM Subsystem Shutdown event: Not Supported 00:27:01.981 Zone Descriptor Change Notices: Not Supported 00:27:01.981 Discovery Log Change Notices: Not Supported 00:27:01.981 Controller Attributes 00:27:01.981 128-bit Host Identifier: Supported 00:27:01.981 Non-Operational Permissive Mode: Not Supported 00:27:01.981 NVM Sets: Not Supported 00:27:01.981 Read Recovery Levels: Not Supported 00:27:01.981 Endurance Groups: Not Supported 00:27:01.981 Predictable Latency Mode: Not Supported 00:27:01.981 Traffic Based Keep ALive: Not Supported 00:27:01.981 Namespace Granularity: Not Supported 00:27:01.981 SQ Associations: Not Supported 00:27:01.981 UUID List: Not Supported 00:27:01.981 Multi-Domain Subsystem: Not Supported 00:27:01.981 Fixed Capacity Management: Not Supported 00:27:01.981 Variable Capacity Management: Not Supported 00:27:01.981 Delete Endurance Group: Not Supported 00:27:01.981 Delete NVM Set: Not Supported 00:27:01.981 Extended LBA Formats Supported: Not Supported 00:27:01.981 Flexible Data Placement Supported: Not Supported 00:27:01.981 00:27:01.981 Controller Memory Buffer Support 00:27:01.981 ================================ 00:27:01.981 Supported: No 00:27:01.981 00:27:01.981 
Persistent Memory Region Support 00:27:01.981 ================================ 00:27:01.981 Supported: No 00:27:01.981 00:27:01.981 Admin Command Set Attributes 00:27:01.981 ============================ 00:27:01.981 Security Send/Receive: Not Supported 00:27:01.981 Format NVM: Not Supported 00:27:01.981 Firmware Activate/Download: Not Supported 00:27:01.981 Namespace Management: Not Supported 00:27:01.981 Device Self-Test: Not Supported 00:27:01.981 Directives: Not Supported 00:27:01.981 NVMe-MI: Not Supported 00:27:01.981 Virtualization Management: Not Supported 00:27:01.981 Doorbell Buffer Config: Not Supported 00:27:01.981 Get LBA Status Capability: Not Supported 00:27:01.981 Command & Feature Lockdown Capability: Not Supported 00:27:01.981 Abort Command Limit: 4 00:27:01.981 Async Event Request Limit: 4 00:27:01.981 Number of Firmware Slots: N/A 00:27:01.981 Firmware Slot 1 Read-Only: N/A 00:27:01.981 Firmware Activation Without Reset: N/A 00:27:01.981 Multiple Update Detection Support: N/A 00:27:01.981 Firmware Update Granularity: No Information Provided 00:27:01.981 Per-Namespace SMART Log: No 00:27:01.981 Asymmetric Namespace Access Log Page: Not Supported 00:27:01.981 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:01.981 Command Effects Log Page: Supported 00:27:01.981 Get Log Page Extended Data: Supported 00:27:01.981 Telemetry Log Pages: Not Supported 00:27:01.981 Persistent Event Log Pages: Not Supported 00:27:01.981 Supported Log Pages Log Page: May Support 00:27:01.981 Commands Supported & Effects Log Page: Not Supported 00:27:01.981 Feature Identifiers & Effects Log Page:May Support 00:27:01.981 NVMe-MI Commands & Effects Log Page: May Support 00:27:01.981 Data Area 4 for Telemetry Log: Not Supported 00:27:01.981 Error Log Page Entries Supported: 128 00:27:01.981 Keep Alive: Supported 00:27:01.981 Keep Alive Granularity: 10000 ms 00:27:01.981 00:27:01.981 NVM Command Set Attributes 00:27:01.981 ========================== 00:27:01.981 Submission Queue Entry Size 00:27:01.981 Max: 64 00:27:01.981 Min: 64 00:27:01.981 Completion Queue Entry Size 00:27:01.981 Max: 16 00:27:01.981 Min: 16 00:27:01.981 Number of Namespaces: 32 00:27:01.981 Compare Command: Supported 00:27:01.981 Write Uncorrectable Command: Not Supported 00:27:01.981 Dataset Management Command: Supported 00:27:01.981 Write Zeroes Command: Supported 00:27:01.981 Set Features Save Field: Not Supported 00:27:01.981 Reservations: Supported 00:27:01.981 Timestamp: Not Supported 00:27:01.981 Copy: Supported 00:27:01.981 Volatile Write Cache: Present 00:27:01.981 Atomic Write Unit (Normal): 1 00:27:01.981 Atomic Write Unit (PFail): 1 00:27:01.981 Atomic Compare & Write Unit: 1 00:27:01.981 Fused Compare & Write: Supported 00:27:01.981 Scatter-Gather List 00:27:01.981 SGL Command Set: Supported 00:27:01.981 SGL Keyed: Supported 00:27:01.981 SGL Bit Bucket Descriptor: Not Supported 00:27:01.981 SGL Metadata Pointer: Not Supported 00:27:01.981 Oversized SGL: Not Supported 00:27:01.981 SGL Metadata Address: Not Supported 00:27:01.981 SGL Offset: Supported 00:27:01.981 Transport SGL Data Block: Not Supported 00:27:01.981 Replay Protected Memory Block: Not Supported 00:27:01.981 00:27:01.981 Firmware Slot Information 00:27:01.981 ========================= 00:27:01.981 Active slot: 1 00:27:01.981 Slot 1 Firmware Revision: 24.09 00:27:01.981 00:27:01.981 00:27:01.981 Commands Supported and Effects 00:27:01.981 ============================== 00:27:01.981 Admin Commands 00:27:01.981 -------------- 00:27:01.981 Get Log Page (02h): 
Supported 00:27:01.981 Identify (06h): Supported 00:27:01.981 Abort (08h): Supported 00:27:01.981 Set Features (09h): Supported 00:27:01.981 Get Features (0Ah): Supported 00:27:01.981 Asynchronous Event Request (0Ch): Supported 00:27:01.981 Keep Alive (18h): Supported 00:27:01.981 I/O Commands 00:27:01.981 ------------ 00:27:01.981 Flush (00h): Supported LBA-Change 00:27:01.981 Write (01h): Supported LBA-Change 00:27:01.981 Read (02h): Supported 00:27:01.981 Compare (05h): Supported 00:27:01.981 Write Zeroes (08h): Supported LBA-Change 00:27:01.981 Dataset Management (09h): Supported LBA-Change 00:27:01.981 Copy (19h): Supported LBA-Change 00:27:01.981 00:27:01.981 Error Log 00:27:01.981 ========= 00:27:01.981 00:27:01.981 Arbitration 00:27:01.981 =========== 00:27:01.981 Arbitration Burst: 1 00:27:01.981 00:27:01.981 Power Management 00:27:01.981 ================ 00:27:01.981 Number of Power States: 1 00:27:01.981 Current Power State: Power State #0 00:27:01.981 Power State #0: 00:27:01.981 Max Power: 0.00 W 00:27:01.981 Non-Operational State: Operational 00:27:01.981 Entry Latency: Not Reported 00:27:01.981 Exit Latency: Not Reported 00:27:01.981 Relative Read Throughput: 0 00:27:01.981 Relative Read Latency: 0 00:27:01.981 Relative Write Throughput: 0 00:27:01.981 Relative Write Latency: 0 00:27:01.981 Idle Power: Not Reported 00:27:01.981 Active Power: Not Reported 00:27:01.981 Non-Operational Permissive Mode: Not Supported 00:27:01.981 00:27:01.981 Health Information 00:27:01.981 ================== 00:27:01.982 Critical Warnings: 00:27:01.982 Available Spare Space: OK 00:27:01.982 Temperature: OK 00:27:01.982 Device Reliability: OK 00:27:01.982 Read Only: No 00:27:01.982 Volatile Memory Backup: OK 00:27:01.982 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:01.982 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:01.982 Available Spare: 0% 00:27:01.982 Available Spare Threshold: 0% 00:27:01.982 Life Percentage Used:[2024-07-15 10:00:18.543410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.543421] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x121c630) 00:27:01.982 [2024-07-15 10:00:18.543432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.982 [2024-07-15 10:00:18.543457] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126ba00, cid 7, qid 0 00:27:01.982 [2024-07-15 10:00:18.543738] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.982 [2024-07-15 10:00:18.543751] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.982 [2024-07-15 10:00:18.543758] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.543765] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126ba00) on tqpair=0x121c630 00:27:01.982 [2024-07-15 10:00:18.543815] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:01.982 [2024-07-15 10:00:18.543849] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126af80) on tqpair=0x121c630 00:27:01.982 [2024-07-15 10:00:18.543860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.982 [2024-07-15 10:00:18.543868] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b100) on 
tqpair=0x121c630 00:27:01.982 [2024-07-15 10:00:18.543883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.982 [2024-07-15 10:00:18.543908] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b280) on tqpair=0x121c630 00:27:01.982 [2024-07-15 10:00:18.543916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.982 [2024-07-15 10:00:18.543924] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b400) on tqpair=0x121c630 00:27:01.982 [2024-07-15 10:00:18.543932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.982 [2024-07-15 10:00:18.543944] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.543952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.543958] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121c630) 00:27:01.982 [2024-07-15 10:00:18.543969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.982 [2024-07-15 10:00:18.543992] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b400, cid 3, qid 0 00:27:01.982 [2024-07-15 10:00:18.544156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.982 [2024-07-15 10:00:18.544169] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.982 [2024-07-15 10:00:18.544176] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.544182] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b400) on tqpair=0x121c630 00:27:01.982 [2024-07-15 10:00:18.544193] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.544201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.544207] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121c630) 00:27:01.982 [2024-07-15 10:00:18.544217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.982 [2024-07-15 10:00:18.544243] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b400, cid 3, qid 0 00:27:01.982 [2024-07-15 10:00:18.544381] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.982 [2024-07-15 10:00:18.544397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.982 [2024-07-15 10:00:18.544403] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.544410] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b400) on tqpair=0x121c630 00:27:01.982 [2024-07-15 10:00:18.544418] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:01.982 [2024-07-15 10:00:18.544425] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:01.982 [2024-07-15 10:00:18.544445] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.544455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:27:01.982 [2024-07-15 10:00:18.544461] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121c630) 00:27:01.982 [2024-07-15 10:00:18.544471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.982 [2024-07-15 10:00:18.544492] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b400, cid 3, qid 0 00:27:01.982 [2024-07-15 10:00:18.544611] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.982 [2024-07-15 10:00:18.544626] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.982 [2024-07-15 10:00:18.544633] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.544640] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b400) on tqpair=0x121c630 00:27:01.982 [2024-07-15 10:00:18.544656] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.544666] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.544672] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121c630) 00:27:01.982 [2024-07-15 10:00:18.544682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.982 [2024-07-15 10:00:18.544703] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b400, cid 3, qid 0 00:27:01.982 [2024-07-15 10:00:18.544816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.982 [2024-07-15 10:00:18.544828] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.982 [2024-07-15 10:00:18.544835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.544842] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b400) on tqpair=0x121c630 00:27:01.982 [2024-07-15 10:00:18.544858] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.544867] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.544873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121c630) 00:27:01.982 [2024-07-15 10:00:18.544893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.982 [2024-07-15 10:00:18.544914] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b400, cid 3, qid 0 00:27:01.982 [2024-07-15 10:00:18.545075] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.982 [2024-07-15 10:00:18.545090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.982 [2024-07-15 10:00:18.545097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.545104] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b400) on tqpair=0x121c630 00:27:01.982 [2024-07-15 10:00:18.545121] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.545130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.545137] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121c630) 00:27:01.982 [2024-07-15 10:00:18.545147] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.982 [2024-07-15 10:00:18.545167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b400, cid 3, qid 0 00:27:01.982 [2024-07-15 10:00:18.545329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.982 [2024-07-15 10:00:18.545341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.982 [2024-07-15 10:00:18.545348] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.545355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b400) on tqpair=0x121c630 00:27:01.982 [2024-07-15 10:00:18.545370] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.545383] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.545390] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121c630) 00:27:01.982 [2024-07-15 10:00:18.545401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.982 [2024-07-15 10:00:18.545421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b400, cid 3, qid 0 00:27:01.982 [2024-07-15 10:00:18.545550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.982 [2024-07-15 10:00:18.545565] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.982 [2024-07-15 10:00:18.545572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.545579] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b400) on tqpair=0x121c630 00:27:01.982 [2024-07-15 10:00:18.545595] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.545605] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.545611] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121c630) 00:27:01.982 [2024-07-15 10:00:18.545621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.982 [2024-07-15 10:00:18.545641] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b400, cid 3, qid 0 00:27:01.982 [2024-07-15 10:00:18.545784] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.982 [2024-07-15 10:00:18.545799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.982 [2024-07-15 10:00:18.545806] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.545813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b400) on tqpair=0x121c630 00:27:01.982 [2024-07-15 10:00:18.545829] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.545839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.545845] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121c630) 00:27:01.982 [2024-07-15 10:00:18.545856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.982 [2024-07-15 10:00:18.545882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x126b400, cid 3, qid 0 00:27:01.982 [2024-07-15 10:00:18.545997] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.982 [2024-07-15 10:00:18.546009] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.982 [2024-07-15 10:00:18.546016] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.546022] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b400) on tqpair=0x121c630 00:27:01.982 [2024-07-15 10:00:18.546038] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.546048] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.546054] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121c630) 00:27:01.982 [2024-07-15 10:00:18.546064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.982 [2024-07-15 10:00:18.546085] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b400, cid 3, qid 0 00:27:01.982 [2024-07-15 10:00:18.546248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.982 [2024-07-15 10:00:18.546263] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.982 [2024-07-15 10:00:18.546270] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.546277] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b400) on tqpair=0x121c630 00:27:01.982 [2024-07-15 10:00:18.546293] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.546302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.546312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121c630) 00:27:01.982 [2024-07-15 10:00:18.546323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.982 [2024-07-15 10:00:18.546344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b400, cid 3, qid 0 00:27:01.982 [2024-07-15 10:00:18.546563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.982 [2024-07-15 10:00:18.546578] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.982 [2024-07-15 10:00:18.546585] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.546592] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b400) on tqpair=0x121c630 00:27:01.982 [2024-07-15 10:00:18.546609] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.546618] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.546625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121c630) 00:27:01.982 [2024-07-15 10:00:18.546635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.982 [2024-07-15 10:00:18.546656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b400, cid 3, qid 0 00:27:01.982 [2024-07-15 10:00:18.546820] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.982 [2024-07-15 10:00:18.546832] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.982 [2024-07-15 10:00:18.546839] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.546845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b400) on tqpair=0x121c630 00:27:01.982 [2024-07-15 10:00:18.546861] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.546870] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.550895] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121c630) 00:27:01.982 [2024-07-15 10:00:18.550911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.982 [2024-07-15 10:00:18.550950] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126b400, cid 3, qid 0 00:27:01.982 [2024-07-15 10:00:18.551119] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.982 [2024-07-15 10:00:18.551135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.982 [2024-07-15 10:00:18.551142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.982 [2024-07-15 10:00:18.551148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126b400) on tqpair=0x121c630 00:27:01.982 [2024-07-15 10:00:18.551162] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:27:01.982 0% 00:27:01.982 Data Units Read: 0 00:27:01.982 Data Units Written: 0 00:27:01.982 Host Read Commands: 0 00:27:01.982 Host Write Commands: 0 00:27:01.982 Controller Busy Time: 0 minutes 00:27:01.982 Power Cycles: 0 00:27:01.982 Power On Hours: 0 hours 00:27:01.982 Unsafe Shutdowns: 0 00:27:01.982 Unrecoverable Media Errors: 0 00:27:01.982 Lifetime Error Log Entries: 0 00:27:01.982 Warning Temperature Time: 0 minutes 00:27:01.982 Critical Temperature Time: 0 minutes 00:27:01.982 00:27:01.982 Number of Queues 00:27:01.982 ================ 00:27:01.982 Number of I/O Submission Queues: 127 00:27:01.982 Number of I/O Completion Queues: 127 00:27:01.982 00:27:01.982 Active Namespaces 00:27:01.982 ================= 00:27:01.982 Namespace ID:1 00:27:01.982 Error Recovery Timeout: Unlimited 00:27:01.982 Command Set Identifier: NVM (00h) 00:27:01.982 Deallocate: Supported 00:27:01.982 Deallocated/Unwritten Error: Not Supported 00:27:01.982 Deallocated Read Value: Unknown 00:27:01.982 Deallocate in Write Zeroes: Not Supported 00:27:01.982 Deallocated Guard Field: 0xFFFF 00:27:01.982 Flush: Supported 00:27:01.982 Reservation: Supported 00:27:01.982 Namespace Sharing Capabilities: Multiple Controllers 00:27:01.982 Size (in LBAs): 131072 (0GiB) 00:27:01.982 Capacity (in LBAs): 131072 (0GiB) 00:27:01.982 Utilization (in LBAs): 131072 (0GiB) 00:27:01.982 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:01.982 EUI64: ABCDEF0123456789 00:27:01.982 UUID: 31528aef-0c69-4dd2-8f44-9c5ff4375656 00:27:01.982 Thin Provisioning: Not Supported 00:27:01.982 Per-NS Atomic Units: Yes 00:27:01.982 Atomic Boundary Size (Normal): 0 00:27:01.982 Atomic Boundary Size (PFail): 0 00:27:01.982 Atomic Boundary Offset: 0 00:27:01.982 Maximum Single Source Range Length: 65535 00:27:01.982 Maximum Copy Length: 65535 00:27:01.982 Maximum Source Range Count: 1 00:27:01.982 NGUID/EUI64 Never Reused: No 00:27:01.982 Namespace Write Protected: No 00:27:01.982 
Number of LBA Formats: 1 00:27:01.982 Current LBA Format: LBA Format #00 00:27:01.982 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:01.982 00:27:01.982 10:00:18 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:01.982 10:00:18 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:01.982 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:01.983 rmmod nvme_tcp 00:27:01.983 rmmod nvme_fabrics 00:27:01.983 rmmod nvme_keyring 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1994286 ']' 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1994286 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1994286 ']' 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1994286 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1994286 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1994286' 00:27:01.983 killing process with pid 1994286 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1994286 00:27:01.983 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1994286 00:27:02.241 10:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:02.241 10:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:02.241 10:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:02.241 10:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:02.241 10:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:02.241 10:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
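At this point the identify test has printed its report and is tearing down: host/identify.sh deletes the subsystem over RPC, and nvmftestfini unloads the kernel initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above), kills the nvmf target process, and removes the target-side network namespace. A hedged sketch of the equivalent manual cleanup, reusing the PID and namespace name from this run and assuming the target was started from the same shell:

    # Sketch: mirror the teardown traced here (values taken from this log).
    modprobe -v -r nvme-tcp            # unloads nvme_tcp and its dependencies, per the rmmod lines
    kill 1994286 && wait 1994286       # killprocess(): stop the nvmf target, then reap it
    ip netns delete cvl_0_0_ns_spdk    # what _remove_spdk_ns presumably does for this run
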
00:27:02.241 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:02.241 10:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.774 10:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:04.774 00:27:04.774 real 0m5.322s 00:27:04.774 user 0m4.255s 00:27:04.774 sys 0m1.795s 00:27:04.774 10:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:04.774 10:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:04.774 ************************************ 00:27:04.774 END TEST nvmf_identify 00:27:04.774 ************************************ 00:27:04.774 10:00:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:04.774 10:00:20 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:04.774 10:00:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:04.774 10:00:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:04.774 10:00:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:04.774 ************************************ 00:27:04.774 START TEST nvmf_perf 00:27:04.774 ************************************ 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:04.774 * Looking for test storage... 00:27:04.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
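Here nvmftestinit has sourced nvmf/common.sh and built the app argument list; next it enumerates candidate NICs by PCI vendor:device ID (the e810/x722/mlx arrays traced below) and, for TCP, resolves each matching device to its kernel netdev through sysfs — in this run 0x8086:0x159b (Intel E810) at 0000:0a:00.0 and 0000:0a:00.1, yielding cvl_0_0 and cvl_0_1. A hedged, minimal equivalent of that discovery, assuming lspci is available on the host:

    # Sketch: find E810 ports the way the harness does, by PCI ID, then map
    # each one to its netdev via sysfs (prints e.g. cvl_0_0 on this machine).
    for bdf in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
        echo "Found $bdf (0x8086 - 0x159b)"
        ls "/sys/bus/pci/devices/$bdf/net"
    done
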
00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:04.774 10:00:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.677 10:00:22 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:06.677 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:06.677 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:27:06.677 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:06.677 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:06.677 10:00:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.677 10:00:23 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:06.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:27:06.677 00:27:06.677 --- 10.0.0.2 ping statistics --- 00:27:06.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.677 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:06.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:27:06.677 00:27:06.677 --- 10.0.0.1 ping statistics --- 00:27:06.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.677 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:06.677 10:00:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:06.678 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1996288 00:27:06.678 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:06.678 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1996288 00:27:06.678 10:00:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1996288 ']' 00:27:06.678 10:00:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.678 10:00:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:06.678 10:00:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.678 10:00:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:06.678 10:00:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:06.678 [2024-07-15 10:00:23.181375] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
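The nvmfappstart/waitforlisten sequence above reduces to: start nvmf_tgt inside the target namespace, record its pid, and poll the RPC socket until the app answers. A hedged sketch of that loop, assuming rpc_get_methods as the liveness probe (the trace only exposes rpc_addr=/var/tmp/spdk.sock and max_retries=100):

ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
rpc_addr=/var/tmp/spdk.sock                    # common/autotest_common.sh@833
max_retries=100                                # common/autotest_common.sh@834
echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
for ((i = 0; i < max_retries; i++)); do
    # once the app serves RPCs on the socket, the target is ready for configuration
    "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null && break
    sleep 0.5
done
kill -0 "$nvmfpid"                             # bail out if the target died while we waited

Only after this returns does the harness run the gen_nvme.sh / load_subsystem_config configuration step seen below.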
00:27:06.678 [2024-07-15 10:00:23.181456] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.678 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.678 [2024-07-15 10:00:23.225538] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:06.678 [2024-07-15 10:00:23.256150] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:06.678 [2024-07-15 10:00:23.348799] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.678 [2024-07-15 10:00:23.348862] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.678 [2024-07-15 10:00:23.348886] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.678 [2024-07-15 10:00:23.348910] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.678 [2024-07-15 10:00:23.348924] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:06.678 [2024-07-15 10:00:23.348996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.678 [2024-07-15 10:00:23.349053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:06.678 [2024-07-15 10:00:23.349170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:06.678 [2024-07-15 10:00:23.349172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.935 10:00:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:06.935 10:00:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:27:06.935 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:06.935 10:00:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:06.935 10:00:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:06.935 10:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.935 10:00:23 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:06.935 10:00:23 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:10.214 10:00:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:10.214 10:00:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:10.214 10:00:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:27:10.214 10:00:26 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:10.471 10:00:27 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:10.471 10:00:27 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:27:10.471 10:00:27 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:10.471 10:00:27 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:10.471 10:00:27 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:10.738 [2024-07-15 10:00:27.347604] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.738 10:00:27 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:11.000 10:00:27 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:11.000 10:00:27 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:11.257 10:00:27 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:11.257 10:00:27 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:11.516 10:00:28 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:11.773 [2024-07-15 10:00:28.331225] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.773 10:00:28 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:12.032 10:00:28 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:27:12.032 10:00:28 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:12.032 10:00:28 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:12.032 10:00:28 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:13.409 Initializing NVMe Controllers 00:27:13.409 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:27:13.409 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:27:13.409 Initialization complete. Launching workers. 00:27:13.409 ======================================================== 00:27:13.409 Latency(us) 00:27:13.409 Device Information : IOPS MiB/s Average min max 00:27:13.409 PCIE (0000:88:00.0) NSID 1 from core 0: 85656.83 334.60 373.17 37.49 7291.66 00:27:13.409 ======================================================== 00:27:13.409 Total : 85656.83 334.60 373.17 37.49 7291.66 00:27:13.409 00:27:13.409 10:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:13.409 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.342 Initializing NVMe Controllers 00:27:14.342 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:14.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:14.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:14.342 Initialization complete. Launching workers. 
00:27:14.342 ======================================================== 00:27:14.342 Latency(us) 00:27:14.342 Device Information : IOPS MiB/s Average min max 00:27:14.342 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 90.86 0.35 11007.83 185.33 45803.39 00:27:14.342 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 48.92 0.19 21415.48 7929.55 51875.74 00:27:14.342 ======================================================== 00:27:14.342 Total : 139.78 0.55 14650.51 185.33 51875.74 00:27:14.342 00:27:14.342 10:00:31 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:14.598 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.531 Initializing NVMe Controllers 00:27:15.532 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:15.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:15.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:15.532 Initialization complete. Launching workers. 00:27:15.532 ======================================================== 00:27:15.532 Latency(us) 00:27:15.532 Device Information : IOPS MiB/s Average min max 00:27:15.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8300.10 32.42 3857.62 402.65 8304.76 00:27:15.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3848.31 15.03 8316.20 5378.11 15825.02 00:27:15.532 ======================================================== 00:27:15.532 Total : 12148.41 47.45 5269.99 402.65 15825.02 00:27:15.532 00:27:15.789 10:00:32 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:15.789 10:00:32 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:15.789 10:00:32 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:15.789 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.318 Initializing NVMe Controllers 00:27:18.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:18.318 Controller IO queue size 128, less than required. 00:27:18.318 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:18.318 Controller IO queue size 128, less than required. 00:27:18.318 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:18.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:18.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:18.318 Initialization complete. Launching workers. 
00:27:18.318 ======================================================== 00:27:18.318 Latency(us) 00:27:18.318 Device Information : IOPS MiB/s Average min max 00:27:18.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 966.19 241.55 137269.15 88314.43 190492.47 00:27:18.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 600.07 150.02 223890.54 85470.03 366196.94 00:27:18.318 ======================================================== 00:27:18.318 Total : 1566.26 391.56 170455.62 85470.03 366196.94 00:27:18.318 00:27:18.318 10:00:34 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:18.318 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.318 No valid NVMe controllers or AIO or URING devices found 00:27:18.318 Initializing NVMe Controllers 00:27:18.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:18.318 Controller IO queue size 128, less than required. 00:27:18.318 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:18.318 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:18.318 Controller IO queue size 128, less than required. 00:27:18.318 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:18.318 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:27:18.318 WARNING: Some requested NVMe devices were skipped 00:27:18.318 10:00:35 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:18.318 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.845 Initializing NVMe Controllers 00:27:20.845 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:20.845 Controller IO queue size 128, less than required. 00:27:20.845 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:20.845 Controller IO queue size 128, less than required. 00:27:20.845 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:20.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:20.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:20.845 Initialization complete. Launching workers. 
00:27:20.845 00:27:20.845 ==================== 00:27:20.845 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:20.845 TCP transport: 00:27:20.845 polls: 17271 00:27:20.845 idle_polls: 6379 00:27:20.845 sock_completions: 10892 00:27:20.845 nvme_completions: 5069 00:27:20.845 submitted_requests: 7642 00:27:20.845 queued_requests: 1 00:27:20.845 00:27:20.845 ==================== 00:27:20.845 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:20.845 TCP transport: 00:27:20.845 polls: 17497 00:27:20.845 idle_polls: 6812 00:27:20.845 sock_completions: 10685 00:27:20.845 nvme_completions: 5055 00:27:20.845 submitted_requests: 7586 00:27:20.845 queued_requests: 1 00:27:20.845 ======================================================== 00:27:20.845 Latency(us) 00:27:20.845 Device Information : IOPS MiB/s Average min max 00:27:20.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1266.86 316.72 104704.53 54965.36 150330.50 00:27:20.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1263.36 315.84 102982.42 49224.59 136308.45 00:27:20.845 ======================================================== 00:27:20.845 Total : 2530.22 632.56 103844.67 49224.59 150330.50 00:27:20.845 00:27:20.845 10:00:37 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:20.845 10:00:37 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:21.102 10:00:37 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:21.102 10:00:37 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:27:21.102 10:00:37 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:27:24.390 10:00:40 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=db380d48-606c-4267-acc8-741f266008bb 00:27:24.390 10:00:40 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb db380d48-606c-4267-acc8-741f266008bb 00:27:24.390 10:00:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=db380d48-606c-4267-acc8-741f266008bb 00:27:24.390 10:00:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:24.390 10:00:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:24.390 10:00:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:24.391 10:00:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:24.391 10:00:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:24.391 { 00:27:24.391 "uuid": "db380d48-606c-4267-acc8-741f266008bb", 00:27:24.391 "name": "lvs_0", 00:27:24.391 "base_bdev": "Nvme0n1", 00:27:24.391 "total_data_clusters": 238234, 00:27:24.391 "free_clusters": 238234, 00:27:24.391 "block_size": 512, 00:27:24.391 "cluster_size": 4194304 00:27:24.391 } 00:27:24.391 ]' 00:27:24.391 10:00:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="db380d48-606c-4267-acc8-741f266008bb") .free_clusters' 00:27:24.653 10:00:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:27:24.653 10:00:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="db380d48-606c-4267-acc8-741f266008bb") .cluster_size' 00:27:24.653 10:00:41 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:24.653 10:00:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:27:24.653 10:00:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:27:24.653 952936 00:27:24.653 10:00:41 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:27:24.653 10:00:41 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:27:24.653 10:00:41 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u db380d48-606c-4267-acc8-741f266008bb lbd_0 20480 00:27:24.910 10:00:41 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=ebc9b6eb-4e96-42ae-8487-dbf3add6b6e5 00:27:24.910 10:00:41 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore ebc9b6eb-4e96-42ae-8487-dbf3add6b6e5 lvs_n_0 00:27:25.846 10:00:42 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=aebac066-bfdf-48cc-8b73-aea49ee31c73 00:27:25.846 10:00:42 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb aebac066-bfdf-48cc-8b73-aea49ee31c73 00:27:25.846 10:00:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=aebac066-bfdf-48cc-8b73-aea49ee31c73 00:27:25.846 10:00:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:25.846 10:00:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:25.846 10:00:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:25.846 10:00:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:26.104 10:00:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:26.104 { 00:27:26.104 "uuid": "db380d48-606c-4267-acc8-741f266008bb", 00:27:26.104 "name": "lvs_0", 00:27:26.104 "base_bdev": "Nvme0n1", 00:27:26.104 "total_data_clusters": 238234, 00:27:26.104 "free_clusters": 233114, 00:27:26.104 "block_size": 512, 00:27:26.104 "cluster_size": 4194304 00:27:26.104 }, 00:27:26.104 { 00:27:26.104 "uuid": "aebac066-bfdf-48cc-8b73-aea49ee31c73", 00:27:26.104 "name": "lvs_n_0", 00:27:26.104 "base_bdev": "ebc9b6eb-4e96-42ae-8487-dbf3add6b6e5", 00:27:26.104 "total_data_clusters": 5114, 00:27:26.104 "free_clusters": 5114, 00:27:26.104 "block_size": 512, 00:27:26.104 "cluster_size": 4194304 00:27:26.104 } 00:27:26.104 ]' 00:27:26.104 10:00:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="aebac066-bfdf-48cc-8b73-aea49ee31c73") .free_clusters' 00:27:26.104 10:00:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:27:26.104 10:00:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="aebac066-bfdf-48cc-8b73-aea49ee31c73") .cluster_size' 00:27:26.104 10:00:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:26.104 10:00:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:27:26.104 10:00:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:27:26.104 20456 00:27:26.104 10:00:42 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:26.104 10:00:42 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aebac066-bfdf-48cc-8b73-aea49ee31c73 lbd_nest_0 20456 00:27:26.361 10:00:42 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=9685a7f2-b939-4391-a88d-a97da8991129 00:27:26.361 10:00:42 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:26.617 10:00:43 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:26.617 10:00:43 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 9685a7f2-b939-4391-a88d-a97da8991129 00:27:26.873 10:00:43 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:27.131 10:00:43 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:27:27.131 10:00:43 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:27.131 10:00:43 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:27.131 10:00:43 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:27.131 10:00:43 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:27.131 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.369 Initializing NVMe Controllers 00:27:39.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:39.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:39.369 Initialization complete. Launching workers. 00:27:39.369 ======================================================== 00:27:39.369 Latency(us) 00:27:39.369 Device Information : IOPS MiB/s Average min max 00:27:39.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 48.10 0.02 20815.49 208.98 45885.58 00:27:39.369 ======================================================== 00:27:39.369 Total : 48.10 0.02 20815.49 208.98 45885.58 00:27:39.369 00:27:39.369 10:00:54 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:39.369 10:00:54 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:39.369 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.340 Initializing NVMe Controllers 00:27:49.340 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:49.340 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:49.340 Initialization complete. Launching workers. 
00:27:49.340 ======================================================== 00:27:49.340 Latency(us) 00:27:49.340 Device Information : IOPS MiB/s Average min max 00:27:49.340 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 66.60 8.32 15051.16 4988.76 55867.66 00:27:49.340 ======================================================== 00:27:49.340 Total : 66.60 8.32 15051.16 4988.76 55867.66 00:27:49.340 00:27:49.340 10:01:04 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:49.340 10:01:04 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:49.340 10:01:04 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:49.340 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.311 Initializing NVMe Controllers 00:27:59.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:59.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:59.311 Initialization complete. Launching workers. 00:27:59.311 ======================================================== 00:27:59.311 Latency(us) 00:27:59.311 Device Information : IOPS MiB/s Average min max 00:27:59.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7356.48 3.59 4362.85 291.49 47870.10 00:27:59.311 ======================================================== 00:27:59.311 Total : 7356.48 3.59 4362.85 291.49 47870.10 00:27:59.311 00:27:59.311 10:01:14 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:59.311 10:01:14 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:59.311 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.280 Initializing NVMe Controllers 00:28:09.280 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:09.280 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:09.280 Initialization complete. Launching workers. 00:28:09.280 ======================================================== 00:28:09.280 Latency(us) 00:28:09.280 Device Information : IOPS MiB/s Average min max 00:28:09.280 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2455.25 306.91 13034.04 700.47 29183.87 00:28:09.280 ======================================================== 00:28:09.280 Total : 2455.25 306.91 13034.04 700.47 29183.87 00:28:09.280 00:28:09.280 10:01:25 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:09.280 10:01:25 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:09.280 10:01:25 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:09.280 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.277 Initializing NVMe Controllers 00:28:19.277 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:19.277 Controller IO queue size 128, less than required. 00:28:19.277 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:19.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:19.277 Initialization complete. Launching workers. 00:28:19.277 ======================================================== 00:28:19.277 Latency(us) 00:28:19.277 Device Information : IOPS MiB/s Average min max 00:28:19.277 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11852.20 5.79 10799.60 1782.34 24970.66 00:28:19.277 ======================================================== 00:28:19.277 Total : 11852.20 5.79 10799.60 1782.34 24970.66 00:28:19.277 00:28:19.277 10:01:35 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:19.277 10:01:35 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:19.277 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.481 Initializing NVMe Controllers 00:28:31.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:31.481 Controller IO queue size 128, less than required. 00:28:31.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:31.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:31.481 Initialization complete. Launching workers. 00:28:31.481 ======================================================== 00:28:31.481 Latency(us) 00:28:31.481 Device Information : IOPS MiB/s Average min max 00:28:31.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1202.12 150.27 106940.47 31134.50 183164.79 00:28:31.481 ======================================================== 00:28:31.481 Total : 1202.12 150.27 106940.47 31134.50 183164.79 00:28:31.481 00:28:31.481 10:01:46 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:31.481 10:01:46 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9685a7f2-b939-4391-a88d-a97da8991129 00:28:31.481 10:01:47 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:31.481 10:01:47 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ebc9b6eb-4e96-42ae-8487-dbf3add6b6e5 00:28:31.481 10:01:47 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:31.481 rmmod nvme_tcp 00:28:31.481 rmmod nvme_fabrics 00:28:31.481 rmmod nvme_keyring 00:28:31.481 10:01:48 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1996288 ']' 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1996288 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1996288 ']' 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1996288 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1996288 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1996288' 00:28:31.481 killing process with pid 1996288 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1996288 00:28:31.481 10:01:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1996288 00:28:33.385 10:01:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:33.385 10:01:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:33.385 10:01:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:33.385 10:01:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:33.385 10:01:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:33.385 10:01:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.385 10:01:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:33.385 10:01:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.294 10:01:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:35.294 00:28:35.294 real 1m30.717s 00:28:35.294 user 5m34.752s 00:28:35.294 sys 0m16.088s 00:28:35.294 10:01:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:35.294 10:01:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:35.294 ************************************ 00:28:35.294 END TEST nvmf_perf 00:28:35.294 ************************************ 00:28:35.294 10:01:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:35.294 10:01:51 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:35.294 10:01:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:35.294 10:01:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:35.294 10:01:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:35.294 ************************************ 00:28:35.294 START TEST nvmf_fio_host 00:28:35.294 ************************************ 00:28:35.294 10:01:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:35.294 * Looking for test 
storage... 00:28:35.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:28:35.295 10:01:51 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:37.202 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
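The array declarations above (common.sh@291-@318) build a vendor/device-id table: E810 and X722 parts under Intel 0x8086, ConnectX parts under Mellanox 0x15b3. The harness resolves these through its own pci_bus_cache; the same lookup can be approximated with lspci (a sketch, not the original implementation):

intel=0x8086 mellanox=0x15b3
e810=(0x1592 0x159b)                           # common.sh@301-302
x722=(0x37d2)                                  # common.sh@304
mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)
for id in "${e810[@]}"; do
    # lspci -d takes vendor:device as bare hex; -D keeps the full PCI address
    while read -r pci _; do
        echo "Found $pci ($intel - $id)"
    done < <(lspci -Dd "${intel#0x}:${id#0x}")
done

On this host the loop lands on the two E810 ports at 0000:0a:00.0/.1 (device 0x159b, driver ice), exactly as the surrounding Found records report.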
00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:37.202 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.202 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:37.203 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:37.203 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
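The trace above is nvmf/common.sh taking inventory of the machine's NICs: PCI vendor/device IDs are matched against known Intel E810/X722 and Mellanox parts, the two E810 ports found here (0x8086 - 0x159b at 0000:0a:00.0 and .1) are kept, and each is resolved to its kernel net device (cvl_0_0, cvl_0_1) through sysfs. A minimal stand-alone sketch of that walk, using plain sysfs paths rather than SPDK's internal pci_bus_cache arrays:

# Sketch only: enumerate Intel E810 ports and their bound net devices.
# The vendor/device IDs are the ones matched in this log; the loop
# structure and output format are illustrative, not SPDK's exact code.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")      # e.g. 0x8086 (Intel)
    device=$(cat "$pci/device")      # e.g. 0x159b (E810)
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
    for net in "$pci"/net/*; do      # the kernel exposes bound netdevs here
        [[ -e $net ]] && echo "Found net device ${net##*/} under ${pci##*/}"
    done
done

The is_hw=yes that follows simply records that real hardware, not virtual interfaces, will carry the NVMe/TCP traffic in this run.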
00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:37.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:37.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:28:37.203 00:28:37.203 --- 10.0.0.2 ping statistics --- 00:28:37.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.203 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:37.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:37.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:28:37.203 00:28:37.203 --- 10.0.0.1 ping statistics --- 00:28:37.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.203 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2008250 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2008250 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2008250 ']' 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:37.203 10:01:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.203 [2024-07-15 10:01:53.925004] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:28:37.203 [2024-07-15 10:01:53.925076] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:37.203 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.203 [2024-07-15 10:01:53.962308] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
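Condensing the nvmf_tcp_init sequence traced above: one E810 port (cvl_0_0) is moved into a fresh network namespace and given the target address, its peer port (cvl_0_1) stays in the root namespace as the initiator, TCP port 4420 is opened in the firewall, and reachability is proven in both directions with ping before the target starts. On this phy test bed the two ports are evidently linked to each other, so the traffic crosses a real wire rather than a software loopback:

# Condensed from the trace above; interface names, addresses, and the
# namespace name are verbatim from this log.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

The target itself is then launched inside that namespace (host/fio.sh@23 above) as ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, and every subsequent rpc.py call configures that process over /var/tmp/spdk.sock.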
00:28:37.461 [2024-07-15 10:01:53.989493] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:37.461 [2024-07-15 10:01:54.074872] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:37.461 [2024-07-15 10:01:54.074945] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:37.461 [2024-07-15 10:01:54.074972] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:37.461 [2024-07-15 10:01:54.074983] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:37.461 [2024-07-15 10:01:54.074992] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:37.461 [2024-07-15 10:01:54.075055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.461 [2024-07-15 10:01:54.075143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:37.461 [2024-07-15 10:01:54.075204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:37.461 [2024-07-15 10:01:54.075206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.461 10:01:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:37.461 10:01:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:28:37.461 10:01:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:37.719 [2024-07-15 10:01:54.439537] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.719 10:01:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:37.719 10:01:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:37.719 10:01:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.719 10:01:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:37.977 Malloc1 00:28:37.977 10:01:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:38.544 10:01:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:38.544 10:01:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:38.802 [2024-07-15 10:01:55.571615] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:39.058 10:01:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:39.058 10:01:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:39.058 10:01:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:39.058 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:39.058 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:39.058 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:39.058 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:39.058 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:39.058 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:39.058 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:39.058 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:39.058 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:39.058 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:39.058 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:39.314 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:39.314 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:39.314 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:39.314 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:39.314 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:39.314 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:39.314 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:39.314 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:39.314 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:39.314 10:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:39.314 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:39.314 fio-3.35 00:28:39.314 Starting 1 thread 00:28:39.314 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.839 00:28:41.839 test: (groupid=0, jobs=1): err= 0: pid=2008607: Mon Jul 15 10:01:58 2024 00:28:41.839 read: IOPS=9130, BW=35.7MiB/s (37.4MB/s)(71.5MiB/2006msec) 00:28:41.839 slat (usec): min=2, max=161, avg= 2.77, stdev= 2.07 00:28:41.839 clat (usec): min=2434, max=13377, avg=7740.94, stdev=584.51 00:28:41.839 lat (usec): min=2463, max=13380, avg=7743.70, stdev=584.38 00:28:41.839 clat percentiles (usec): 00:28:41.839 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7308], 00:28:41.839 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7898], 00:28:41.839 | 70.00th=[ 
8029], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8586], 00:28:41.839 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[12518], 99.95th=[12780], 00:28:41.839 | 99.99th=[13304] 00:28:41.839 bw ( KiB/s): min=35568, max=36984, per=99.89%, avg=36482.00, stdev=625.65, samples=4 00:28:41.839 iops : min= 8892, max= 9246, avg=9120.50, stdev=156.41, samples=4 00:28:41.839 write: IOPS=9141, BW=35.7MiB/s (37.4MB/s)(71.6MiB/2006msec); 0 zone resets 00:28:41.839 slat (usec): min=2, max=150, avg= 2.91, stdev= 1.69 00:28:41.839 clat (usec): min=1453, max=11643, avg=6231.31, stdev=494.92 00:28:41.839 lat (usec): min=1462, max=11645, avg=6234.22, stdev=494.86 00:28:41.839 clat percentiles (usec): 00:28:41.839 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:28:41.839 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6325], 00:28:41.839 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6980], 00:28:41.839 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[ 9503], 99.95th=[ 9896], 00:28:41.839 | 99.99th=[11600] 00:28:41.839 bw ( KiB/s): min=36360, max=36840, per=100.00%, avg=36572.00, stdev=224.43, samples=4 00:28:41.839 iops : min= 9090, max= 9210, avg=9143.00, stdev=56.11, samples=4 00:28:41.839 lat (msec) : 2=0.02%, 4=0.11%, 10=99.75%, 20=0.11% 00:28:41.839 cpu : usr=61.40%, sys=33.67%, ctx=68, majf=0, minf=41 00:28:41.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:41.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:41.839 issued rwts: total=18316,18338,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.839 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.839 00:28:41.839 Run status group 0 (all jobs): 00:28:41.839 READ: bw=35.7MiB/s (37.4MB/s), 35.7MiB/s-35.7MiB/s (37.4MB/s-37.4MB/s), io=71.5MiB (75.0MB), run=2006-2006msec 00:28:41.839 WRITE: bw=35.7MiB/s (37.4MB/s), 35.7MiB/s-35.7MiB/s (37.4MB/s-37.4MB/s), io=71.6MiB (75.1MB), run=2006-2006msec 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:41.839 10:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:41.839 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:41.839 fio-3.35 00:28:41.839 Starting 1 thread 00:28:41.839 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.399 00:28:44.399 test: (groupid=0, jobs=1): err= 0: pid=2008946: Mon Jul 15 10:02:00 2024 00:28:44.399 read: IOPS=8505, BW=133MiB/s (139MB/s)(267MiB/2008msec) 00:28:44.399 slat (usec): min=2, max=105, avg= 3.73, stdev= 1.57 00:28:44.399 clat (usec): min=1923, max=16449, avg=8772.26, stdev=2056.44 00:28:44.399 lat (usec): min=1926, max=16453, avg=8775.99, stdev=2056.47 00:28:44.399 clat percentiles (usec): 00:28:44.399 | 1.00th=[ 4686], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 6980], 00:28:44.399 | 30.00th=[ 7570], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9241], 00:28:44.399 | 70.00th=[ 9634], 80.00th=[10552], 90.00th=[11469], 95.00th=[12256], 00:28:44.399 | 99.00th=[14484], 99.50th=[14877], 99.90th=[15533], 99.95th=[15533], 00:28:44.399 | 99.99th=[15795] 00:28:44.399 bw ( KiB/s): min=63456, max=78656, per=51.94%, avg=70680.00, stdev=8073.96, samples=4 00:28:44.399 iops : min= 3966, max= 4916, avg=4417.50, stdev=504.62, samples=4 00:28:44.399 write: IOPS=5004, BW=78.2MiB/s (82.0MB/s)(145MiB/1848msec); 0 zone resets 00:28:44.399 slat (usec): min=30, max=184, avg=33.82, stdev= 5.32 00:28:44.399 clat (usec): min=4935, max=19482, avg=10985.58, stdev=2054.61 00:28:44.399 lat (usec): min=4967, max=19514, avg=11019.41, stdev=2054.95 00:28:44.399 clat percentiles (usec): 00:28:44.399 | 1.00th=[ 7308], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[ 9241], 00:28:44.399 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10552], 60.00th=[11076], 00:28:44.399 | 70.00th=[11731], 80.00th=[12649], 90.00th=[14091], 95.00th=[14877], 00:28:44.399 | 99.00th=[16188], 99.50th=[17171], 99.90th=[18744], 99.95th=[19268], 00:28:44.399 | 99.99th=[19530] 00:28:44.399 bw ( KiB/s): min=65856, max=80960, per=91.57%, avg=73320.00, stdev=8211.28, 
samples=4 00:28:44.399 iops : min= 4116, max= 5060, avg=4582.50, stdev=513.21, samples=4 00:28:44.400 lat (msec) : 2=0.01%, 4=0.12%, 10=60.94%, 20=38.93% 00:28:44.400 cpu : usr=76.68%, sys=20.18%, ctx=42, majf=0, minf=63 00:28:44.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:44.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:44.400 issued rwts: total=17079,9248,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.400 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:44.400 00:28:44.400 Run status group 0 (all jobs): 00:28:44.400 READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=267MiB (280MB), run=2008-2008msec 00:28:44.400 WRITE: bw=78.2MiB/s (82.0MB/s), 78.2MiB/s-78.2MiB/s (82.0MB/s-82.0MB/s), io=145MiB (152MB), run=1848-1848msec 00:28:44.400 10:02:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:44.400 10:02:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:28:44.400 10:02:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:28:44.400 10:02:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:28:44.400 10:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:28:44.400 10:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:28:44.400 10:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:44.400 10:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:44.400 10:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:28:44.400 10:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:28:44.400 10:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:28:44.400 10:02:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:28:47.683 Nvme0n1 00:28:47.683 10:02:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:28:50.971 10:02:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=6b89693a-3bd4-41b8-b558-00ff2a5893d3 00:28:50.971 10:02:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 6b89693a-3bd4-41b8-b558-00ff2a5893d3 00:28:50.971 10:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=6b89693a-3bd4-41b8-b558-00ff2a5893d3 00:28:50.971 10:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:50.971 10:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:28:50.971 10:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:28:50.971 10:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:50.971 10:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:50.971 { 00:28:50.971 
"uuid": "6b89693a-3bd4-41b8-b558-00ff2a5893d3", 00:28:50.971 "name": "lvs_0", 00:28:50.971 "base_bdev": "Nvme0n1", 00:28:50.971 "total_data_clusters": 930, 00:28:50.971 "free_clusters": 930, 00:28:50.971 "block_size": 512, 00:28:50.971 "cluster_size": 1073741824 00:28:50.971 } 00:28:50.971 ]' 00:28:50.971 10:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="6b89693a-3bd4-41b8-b558-00ff2a5893d3") .free_clusters' 00:28:50.971 10:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:28:50.971 10:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="6b89693a-3bd4-41b8-b558-00ff2a5893d3") .cluster_size' 00:28:50.971 10:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:28:50.971 10:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:28:50.971 10:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:28:50.971 952320 00:28:50.971 10:02:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:28:51.230 bc1fe630-7c58-4cb3-b62a-a40e17c03c0b 00:28:51.230 10:02:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:28:51.488 10:02:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:28:51.746 10:02:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:52.004 10:02:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:52.262 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:52.262 fio-3.35 00:28:52.262 Starting 1 thread 00:28:52.262 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.790 00:28:54.790 test: (groupid=0, jobs=1): err= 0: pid=2010342: Mon Jul 15 10:02:11 2024 00:28:54.790 read: IOPS=5486, BW=21.4MiB/s (22.5MB/s)(43.0MiB/2007msec) 00:28:54.790 slat (usec): min=2, max=128, avg= 2.72, stdev= 1.94 00:28:54.790 clat (usec): min=1427, max=172295, avg=12868.55, stdev=12125.70 00:28:54.790 lat (usec): min=1430, max=172331, avg=12871.27, stdev=12125.93 00:28:54.790 clat percentiles (msec): 00:28:54.790 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:28:54.790 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:28:54.790 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 14], 00:28:54.790 | 99.00th=[ 15], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174], 00:28:54.790 | 99.99th=[ 174] 00:28:54.790 bw ( KiB/s): min=15528, max=24072, per=99.60%, avg=21858.00, stdev=4221.70, samples=4 00:28:54.790 iops : min= 3882, max= 6018, avg=5464.50, stdev=1055.43, samples=4 00:28:54.790 write: IOPS=5453, BW=21.3MiB/s (22.3MB/s)(42.8MiB/2007msec); 0 zone resets 00:28:54.790 slat (usec): min=2, max=106, avg= 2.84, stdev= 1.60 00:28:54.790 clat (usec): min=405, max=169857, avg=10365.13, stdev=11389.46 00:28:54.790 lat (usec): min=408, max=169863, avg=10367.97, stdev=11389.68 00:28:54.790 clat percentiles (msec): 00:28:54.790 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 9], 00:28:54.790 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 10], 00:28:54.790 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 11], 95.00th=[ 11], 00:28:54.790 | 99.00th=[ 12], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 169], 00:28:54.790 | 99.99th=[ 169] 00:28:54.790 bw ( KiB/s): min=16488, max=23936, per=99.94%, avg=21802.00, stdev=3556.64, samples=4 00:28:54.790 iops : min= 4122, max= 5984, avg=5450.50, stdev=889.16, samples=4 00:28:54.790 lat (usec) : 500=0.01%, 750=0.01% 00:28:54.790 lat (msec) : 2=0.03%, 4=0.08%, 10=36.68%, 20=62.61%, 250=0.58% 00:28:54.791 cpu : usr=55.23%, sys=40.93%, ctx=109, majf=0, 
minf=41 00:28:54.791 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:28:54.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:54.791 issued rwts: total=11011,10946,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:54.791 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:54.791 00:28:54.791 Run status group 0 (all jobs): 00:28:54.791 READ: bw=21.4MiB/s (22.5MB/s), 21.4MiB/s-21.4MiB/s (22.5MB/s-22.5MB/s), io=43.0MiB (45.1MB), run=2007-2007msec 00:28:54.791 WRITE: bw=21.3MiB/s (22.3MB/s), 21.3MiB/s-21.3MiB/s (22.3MB/s-22.3MB/s), io=42.8MiB (44.8MB), run=2007-2007msec 00:28:54.791 10:02:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:54.791 10:02:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:28:56.171 10:02:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=f762a0d3-8cbb-4eda-bfa1-f5d765d0b0ac 00:28:56.171 10:02:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb f762a0d3-8cbb-4eda-bfa1-f5d765d0b0ac 00:28:56.171 10:02:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=f762a0d3-8cbb-4eda-bfa1-f5d765d0b0ac 00:28:56.171 10:02:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:56.171 10:02:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:28:56.171 10:02:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:28:56.171 10:02:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:56.171 10:02:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:56.171 { 00:28:56.171 "uuid": "6b89693a-3bd4-41b8-b558-00ff2a5893d3", 00:28:56.171 "name": "lvs_0", 00:28:56.171 "base_bdev": "Nvme0n1", 00:28:56.171 "total_data_clusters": 930, 00:28:56.171 "free_clusters": 0, 00:28:56.171 "block_size": 512, 00:28:56.171 "cluster_size": 1073741824 00:28:56.171 }, 00:28:56.171 { 00:28:56.171 "uuid": "f762a0d3-8cbb-4eda-bfa1-f5d765d0b0ac", 00:28:56.171 "name": "lvs_n_0", 00:28:56.171 "base_bdev": "bc1fe630-7c58-4cb3-b62a-a40e17c03c0b", 00:28:56.171 "total_data_clusters": 237847, 00:28:56.171 "free_clusters": 237847, 00:28:56.171 "block_size": 512, 00:28:56.171 "cluster_size": 4194304 00:28:56.171 } 00:28:56.171 ]' 00:28:56.171 10:02:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="f762a0d3-8cbb-4eda-bfa1-f5d765d0b0ac") .free_clusters' 00:28:56.171 10:02:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:28:56.171 10:02:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="f762a0d3-8cbb-4eda-bfa1-f5d765d0b0ac") .cluster_size' 00:28:56.428 10:02:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:56.428 10:02:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:28:56.428 10:02:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:28:56.428 951388 00:28:56.428 10:02:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:28:56.990 8a98349e-91cd-4f1c-a620-f0eb464bb54a 00:28:56.990 10:02:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:28:57.247 10:02:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:28:57.554 10:02:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
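The ldd/grep/awk dance above, repeated before every fio run in this log, only checks whether the SPDK fio plugin was linked against a sanitizer runtime (libasan or libclang_rt.asan) that would also need preloading; here none is found, so asan_lib stays empty. The LD_PRELOAD assignment just traced is how stock fio picks up SPDK's external spdk_nvme engine: the job file selects ioengine=spdk (visible in the fio banner below), and --filename carries an NVMe-oF connection string instead of a device path. Each run in this log therefore reduces to:

# Shape of the fio_plugin invocations in this log; the plugin path, job
# file, and connection string are verbatim from the trace.
PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
LD_PRELOAD="$PLUGIN" /usr/src/fio/fio \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
    --bs=4096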
00:28:57.812 10:02:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:58.068 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:58.068 fio-3.35 00:28:58.068 Starting 1 thread 00:28:58.068 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.593 00:29:00.593 test: (groupid=0, jobs=1): err= 0: pid=2011080: Mon Jul 15 10:02:17 2024 00:29:00.593 read: IOPS=5655, BW=22.1MiB/s (23.2MB/s)(45.3MiB/2050msec) 00:29:00.593 slat (usec): min=2, max=122, avg= 2.83, stdev= 2.19 00:29:00.593 clat (usec): min=4431, max=62613, avg=12550.70, stdev=3675.65 00:29:00.593 lat (usec): min=4435, max=62616, avg=12553.53, stdev=3675.65 00:29:00.593 clat percentiles (usec): 00:29:00.593 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[11076], 20.00th=[11469], 00:29:00.593 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:29:00.593 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13566], 95.00th=[13960], 00:29:00.593 | 99.00th=[14746], 99.50th=[51643], 99.90th=[61080], 99.95th=[62129], 00:29:00.593 | 99.99th=[62653] 00:29:00.593 bw ( KiB/s): min=21952, max=23600, per=100.00%, avg=23038.00, stdev=738.04, samples=4 00:29:00.593 iops : min= 5488, max= 5900, avg=5759.50, stdev=184.51, samples=4 00:29:00.593 write: IOPS=5640, BW=22.0MiB/s (23.1MB/s)(45.2MiB/2050msec); 0 zone resets 00:29:00.593 slat (usec): min=2, max=109, avg= 2.99, stdev= 2.05 00:29:00.593 clat (usec): min=2082, max=61762, avg=9988.67, stdev=3169.32 00:29:00.593 lat (usec): min=2088, max=61765, avg=9991.66, stdev=3169.30 00:29:00.593 clat percentiles (usec): 00:29:00.593 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:29:00.593 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:29:00.593 | 70.00th=[10159], 80.00th=[10552], 90.00th=[10814], 95.00th=[11207], 00:29:00.593 | 99.00th=[11863], 99.50th=[12649], 99.90th=[58459], 99.95th=[59507], 00:29:00.593 | 99.99th=[61080] 00:29:00.593 bw ( KiB/s): min=22912, max=23136, per=100.00%, avg=23022.00, stdev=92.92, samples=4 00:29:00.593 iops : min= 5728, max= 5784, avg=5755.50, stdev=23.23, samples=4 00:29:00.593 lat (msec) : 4=0.05%, 10=30.60%, 20=68.81%, 50=0.01%, 100=0.54% 00:29:00.593 cpu : usr=56.52%, sys=40.41%, ctx=103, majf=0, minf=41 00:29:00.593 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:00.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:00.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:00.593 issued rwts: total=11594,11563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:00.593 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:00.593 00:29:00.593 Run status group 0 (all jobs): 00:29:00.593 READ: bw=22.1MiB/s (23.2MB/s), 22.1MiB/s-22.1MiB/s (23.2MB/s-23.2MB/s), io=45.3MiB (47.5MB), run=2050-2050msec 00:29:00.593 WRITE: bw=22.0MiB/s (23.1MB/s), 22.0MiB/s-22.0MiB/s (23.1MB/s-23.1MB/s), io=45.2MiB (47.4MB), run=2050-2050msec 00:29:00.593 10:02:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:00.593 10:02:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:00.593 10:02:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:04.819 10:02:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:04.819 10:02:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:08.098 10:02:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:08.098 10:02:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:09.994 rmmod nvme_tcp 00:29:09.994 rmmod nvme_fabrics 00:29:09.994 rmmod nvme_keyring 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2008250 ']' 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2008250 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2008250 ']' 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2008250 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2008250 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2008250' 00:29:09.994 killing process with pid 2008250 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2008250 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2008250 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:09.994 10:02:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.524 10:02:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:12.524 00:29:12.524 real 0m37.016s 00:29:12.524 user 2m21.200s 00:29:12.524 sys 0m7.244s 00:29:12.524 10:02:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:12.524 10:02:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.524 ************************************ 00:29:12.524 END TEST nvmf_fio_host 00:29:12.524 ************************************ 00:29:12.524 10:02:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:12.524 10:02:28 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:12.524 10:02:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:12.524 10:02:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.524 10:02:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:12.524 ************************************ 00:29:12.524 START TEST nvmf_failover 00:29:12.524 ************************************ 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:12.524 * Looking for test storage... 
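Between END TEST nvmf_fio_host and the nvmf_failover startup above sits the nvmftestfini teardown, which undoes the earlier init: the kernel initiator modules are unloaded (the rmmod lines), the target process is killed, and the namespace plumbing is dropped. Condensed, with the pid and interface names from this run:

# Sketch of the teardown traced above (not the literal function bodies).
modprobe -v -r nvme-tcp      # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
kill 2008250 && wait 2008250 # nvmf_tgt was started earlier as pid 2008250
_remove_spdk_ns              # presumably drops cvl_0_0_ns_spdk; its output is
                             # hidden behind 'eval ... 14> /dev/null' in the trace
ip -4 addr flush cvl_0_1

The nvmf_failover test that begins here then repeats the same nvmf/common.sh setup (PATH exports, PCI discovery, nvmftestinit) before its own workload.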
00:29:12.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:29:12.524 10:02:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.425 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:14.426 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:14.426 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:14.426 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:14.426 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:14.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:14.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:29:14.426 00:29:14.426 --- 10.0.0.2 ping statistics --- 00:29:14.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.426 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:14.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:14.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:29:14.426 00:29:14.426 --- 10.0.0.1 ping statistics --- 00:29:14.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.426 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2014323 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2014323 00:29:14.426 10:02:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2014323 ']' 00:29:14.427 10:02:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.427 10:02:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:14.427 10:02:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.427 10:02:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:14.427 10:02:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:14.427 [2024-07-15 10:02:31.009116] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
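The plumbing above reduces to a short, repeatable recipe: one of the two ice ports (cvl_0_0) is moved into a private network namespace to act as the NVMe/TCP target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings verify the path in both directions before the target starts. A minimal sketch of the same setup follows, assuming root privileges; the veth pair on the first line is an assumption, standing in for the paired physical ports on a machine that lacks them:

    ip link add cvl_0_0 type veth peer name cvl_0_1      # stand-in for the two physical ports (assumption)
    ip netns add cvl_0_0_ns_spdk                         # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the first NVMe/TCP port
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
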
00:29:14.427 [2024-07-15 10:02:31.009200] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.427 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.427 [2024-07-15 10:02:31.047909] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:14.427 [2024-07-15 10:02:31.078202] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:14.427 [2024-07-15 10:02:31.169475] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.427 [2024-07-15 10:02:31.169534] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.427 [2024-07-15 10:02:31.169549] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.427 [2024-07-15 10:02:31.169563] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:14.427 [2024-07-15 10:02:31.169581] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:14.427 [2024-07-15 10:02:31.169664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:14.427 [2024-07-15 10:02:31.169783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:14.427 [2024-07-15 10:02:31.169788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.685 10:02:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:14.685 10:02:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:29:14.685 10:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:14.685 10:02:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:14.685 10:02:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:14.685 10:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.685 10:02:31 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:14.943 [2024-07-15 10:02:31.586658] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.943 10:02:31 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:15.201 Malloc0 00:29:15.201 10:02:31 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:15.459 10:02:32 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:15.717 10:02:32 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:15.975 [2024-07-15 10:02:32.699783] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.975 10:02:32 nvmf_tcp.nvmf_failover -- 
host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:16.233 [2024-07-15 10:02:32.960555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:16.233 10:02:32 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:16.491 [2024-07-15 10:02:33.201332] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:16.491 10:02:33 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2014608 00:29:16.491 10:02:33 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:16.491 10:02:33 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:16.491 10:02:33 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2014608 /var/tmp/bdevperf.sock 00:29:16.491 10:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2014608 ']' 00:29:16.491 10:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:16.491 10:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:16.491 10:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:16.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
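Stripped of xtrace noise, the target bring-up above is a short RPC recipe. The sketch below is not the harness script itself; SPDK_DIR mirrors the workspace path from this run, and nvmf_tgt is assumed to be already running inside the cvl_0_0_ns_spdk namespace as shown earlier:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run; adjust locally
    rpc="$SPDK_DIR/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8192-byte in-capsule data
    $rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                               # one listener per failover path
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    # initiator side: bdevperf with its own RPC socket, 128-deep 4 KiB verify workload for 15 s (flags as in the run above)
    "$SPDK_DIR/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
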
00:29:16.491 10:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:16.748 10:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:16.748 10:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:16.748 10:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:29:16.748 10:02:33 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:17.314 NVMe0n1 00:29:17.314 10:02:33 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:17.571 00 00:29:17.571 10:02:34 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2014742 00:29:17.571 10:02:34 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:17.571 10:02:34 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:18.945 10:02:35 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:18.945 [2024-07-15 10:02:35.533286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e95b0 is same with the state(5) to be set
[... the same recv-state record for tqpair=0x22e95b0 repeats while the connections on port 4420 are torn down; the duplicate records are omitted here ...]
00:29:18.946 10:02:35 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:29:22.242 10:02:38 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:22.242 00 00:29:22.242 10:02:39 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:22.500 [2024-07-15 10:02:39.238169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ea970 is same with the state(5) to be set
[... the same recv-state record for tqpair=0x22ea970 repeats while the connections on port 4421 are torn down; the duplicate records are omitted here ...]
00:29:22.501 10:02:39 
nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:29:25.781 10:02:42 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:25.781 [2024-07-15 10:02:42.533101] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.781 10:02:42 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:29:27.153 10:02:43 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:27.153 10:02:43 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2014742 00:29:33.715 0 00:29:33.715 10:02:49 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2014608 00:29:33.715 10:02:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2014608 ']' 00:29:33.715 10:02:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2014608 00:29:33.715 10:02:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:29:33.715 10:02:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:33.715 10:02:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2014608 00:29:33.715 10:02:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:33.715 10:02:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:33.715 10:02:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2014608' 00:29:33.715 killing process with pid 2014608 00:29:33.715 10:02:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2014608 00:29:33.715 10:02:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2014608 00:29:33.715 10:02:49 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:33.715 [2024-07-15 10:02:33.260614] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:29:33.715 [2024-07-15 10:02:33.260698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2014608 ] 00:29:33.715 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.715 [2024-07-15 10:02:33.292798] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:33.715 [2024-07-15 10:02:33.321015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.715 [2024-07-15 10:02:33.406095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.715 Running I/O for 15 seconds... 
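The records that follow replay bdevperf's own log (try.txt) after a clean exit; each earlier burst of "recv state of tqpair ... is same with the state(5) to be set" errors coincides with a listener being pulled out from under live connections. Condensed, the failover choreography the harness drove looks like the sketch below, reusing $SPDK_DIR and $rpc from the earlier sketch; brpc, introduced here for readability, talks to bdevperf's private RPC socket:

    brpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # primary path
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # standby path
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &   # kick off the 15 s verify run
    run_test_pid=$!
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # drop primary; I/O fails over to 4421
    sleep 3
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # third path
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
    sleep 3
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # restore the original port
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420
    wait $run_test_pid   # exits 0 when the verify workload survived every path switch
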
00:29:33.715 [2024-07-15 10:02:35.535731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.715 [2024-07-15 10:02:35.535772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.715 [2024-07-15 10:02:35.535798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.715 [2024-07-15 10:02:35.535813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command plus 'ABORTED - SQ DELETION (00/08)' completion pair repeats for each outstanding I/O, lba 77976 through lba 78272 in steps of 8 blocks; the duplicate records are omitted here ...]
00:29:33.716 [2024-07-15 10:02:35.536973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78280
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.716 [2024-07-15 10:02:35.536986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.716 [2024-07-15 10:02:35.537001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.716 [2024-07-15 10:02:35.537015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:33.717 [2024-07-15 10:02:35.537275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 
10:02:35.537558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.717 [2024-07-15 10:02:35.537643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.717 [2024-07-15 10:02:35.537672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.717 [2024-07-15 10:02:35.537701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.717 [2024-07-15 10:02:35.537729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.717 [2024-07-15 10:02:35.537758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.717 [2024-07-15 10:02:35.537786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.717 [2024-07-15 10:02:35.537814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.717 [2024-07-15 10:02:35.537842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.717 [2024-07-15 10:02:35.537870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.717 [2024-07-15 10:02:35.537892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.537906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.537921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.537938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.537953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.537967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.537982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.537995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538730] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.718 [2024-07-15 10:02:35.538858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.718 [2024-07-15 10:02:35.538873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.719 [2024-07-15 10:02:35.538894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.538910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.719 [2024-07-15 10:02:35.538923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.538938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.719 [2024-07-15 10:02:35.538952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.538967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.719 [2024-07-15 10:02:35.538988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.539004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.719 [2024-07-15 10:02:35.539018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.539036] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.719 [2024-07-15 10:02:35.539052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.539082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.719 [2024-07-15 10:02:35.539099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78856 len:8 PRP1 0x0 PRP2 0x0 00:29:33.719 [2024-07-15 10:02:35.539112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.539129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:33.719 [2024-07-15 10:02:35.539141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.719 [2024-07-15 10:02:35.539153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78864 len:8 PRP1 0x0 PRP2 0x0 00:29:33.719 [2024-07-15 10:02:35.539166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.539179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:33.719 [2024-07-15 10:02:35.539190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.719 [2024-07-15 10:02:35.539201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78872 len:8 PRP1 0x0 PRP2 0x0 00:29:33.719 [2024-07-15 10:02:35.539214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.539227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:33.719 [2024-07-15 10:02:35.539238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.719 [2024-07-15 10:02:35.539249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78880 len:8 PRP1 0x0 PRP2 0x0 00:29:33.719 [2024-07-15 10:02:35.539262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.539275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:33.719 [2024-07-15 10:02:35.539286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.719 [2024-07-15 10:02:35.539297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78888 len:8 PRP1 0x0 PRP2 0x0 00:29:33.719 [2024-07-15 10:02:35.539310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.539323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:33.719 [2024-07-15 10:02:35.539335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.719 [2024-07-15 10:02:35.539346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78896 len:8 PRP1 0x0 PRP2 0x0 00:29:33.719 [2024-07-15 10:02:35.539359] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.539372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:33.719 [2024-07-15 10:02:35.539383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.719 [2024-07-15 10:02:35.539395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78904 len:8 PRP1 0x0 PRP2 0x0 00:29:33.719 [2024-07-15 10:02:35.539408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.539426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:33.719 [2024-07-15 10:02:35.539438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.719 [2024-07-15 10:02:35.539452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78912 len:8 PRP1 0x0 PRP2 0x0 00:29:33.719 [2024-07-15 10:02:35.539466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.539479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:33.719 [2024-07-15 10:02:35.539490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.719 [2024-07-15 10:02:35.539501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78920 len:8 PRP1 0x0 PRP2 0x0 00:29:33.719 [2024-07-15 10:02:35.539514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.539527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:33.719 [2024-07-15 10:02:35.539537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.719 [2024-07-15 10:02:35.539548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78928 len:8 PRP1 0x0 PRP2 0x0 00:29:33.719 [2024-07-15 10:02:35.539561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.539574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:33.719 [2024-07-15 10:02:35.539585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.719 [2024-07-15 10:02:35.539596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78936 len:8 PRP1 0x0 PRP2 0x0 00:29:33.719 [2024-07-15 10:02:35.539608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.539621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:33.719 [2024-07-15 10:02:35.539631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.719 [2024-07-15 10:02:35.539643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78944 len:8 PRP1 0x0 PRP2 0x0 00:29:33.719 [2024-07-15 10:02:35.539655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.539668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:33.719 [2024-07-15 10:02:35.539678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.719 [2024-07-15 10:02:35.539690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78952 len:8 PRP1 0x0 PRP2 0x0 00:29:33.719 [2024-07-15 10:02:35.539702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.539715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:33.719 [2024-07-15 10:02:35.539725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.719 [2024-07-15 10:02:35.539736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78960 len:8 PRP1 0x0 PRP2 0x0 00:29:33.719 [2024-07-15 10:02:35.539749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.539761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:33.719 [2024-07-15 10:02:35.539772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.719 [2024-07-15 10:02:35.539783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78968 len:8 PRP1 0x0 PRP2 0x0 00:29:33.719 [2024-07-15 10:02:35.539795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.539816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:33.719 [2024-07-15 10:02:35.539828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.719 [2024-07-15 10:02:35.539840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78976 len:8 PRP1 0x0 PRP2 0x0 00:29:33.719 [2024-07-15 10:02:35.539852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.719 [2024-07-15 10:02:35.539914] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f21f10 was disconnected and freed. reset controller. 
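Editor's note: every completion in the burst above carries the status "ABORTED - SQ DELETION (00/08)", i.e. status code type 0 (generic command status) and status code 0x08, which the driver reports when a submission queue is deleted underneath queued commands during a controller reset. Below is a minimal sketch, using the public SPDK headers, of how a host I/O completion callback can recognize that status; the callback name io_complete_cb and the requeue policy in the comment are assumptions for illustration, not part of this test.

#include "spdk/nvme.h"
#include "spdk/nvme_spec.h"

/* Hypothetical I/O completion callback (spdk_nvme_cmd_cb signature). */
static void
io_complete_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The submission queue was deleted (e.g. during a reset or
		 * path failover); the command did not execute, so it can be
		 * resubmitted once the controller is reachable again. */
		return;
	}
	/* ... handle success or other error statuses ... */
}

A callback of this shape would be passed as the cb_fn argument to submission routines such as spdk_nvme_ns_cmd_read()/spdk_nvme_ns_cmd_write(), which matches the READ/WRITE commands being aborted in the records above.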
00:29:33.719 [2024-07-15 10:02:35.539932] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:33.719 [2024-07-15 10:02:35.539965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.720 [2024-07-15 10:02:35.539982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:35.539998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.720 [2024-07-15 10:02:35.540011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:35.540024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.720 [2024-07-15 10:02:35.540037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:35.540051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.720 [2024-07-15 10:02:35.540064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:35.540077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.720 [2024-07-15 10:02:35.540124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efb850 (9): Bad file descriptor 00:29:33.720 [2024-07-15 10:02:35.543362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.720 [2024-07-15 10:02:35.705776] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
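Editor's note: the failover record above moves the controller from 10.0.0.2:4420 to 10.0.0.2:4421 on the same subsystem (nqn.2016-06.io.spdk:cnode1), after which the reset succeeds. A minimal sketch of how those two paths can be described as SPDK transport IDs on the host side follows, assuming the public spdk_nvme_transport_id_parse() helper; the wrapper function build_failover_trids() is hypothetical.

#include <assert.h>
#include "spdk/nvme.h"

/* Hypothetical helper: fill in the primary (4420) and failover (4421)
 * transport IDs named in the log, both on the same TCP listener address. */
static void
build_failover_trids(struct spdk_nvme_transport_id *primary,
		     struct spdk_nvme_transport_id *secondary)
{
	int rc;

	rc = spdk_nvme_transport_id_parse(primary,
		"trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
		"subnqn:nqn.2016-06.io.spdk:cnode1");
	assert(rc == 0);

	rc = spdk_nvme_transport_id_parse(secondary,
		"trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4421 "
		"subnqn:nqn.2016-06.io.spdk:cnode1");
	assert(rc == 0);
	(void)rc;
}

Either transport ID could then be handed to spdk_nvme_connect() to reach the subsystem over the corresponding path, which is the host-side view of the trid switch the bdev_nvme failover performs internally.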
00:29:33.720 [2024-07-15 10:02:39.239223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:39.239292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:39.239323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:39.239351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:39.239379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:39.239414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:39.239442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:39.239470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:39.239497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:39.239524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 
10:02:39.239552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:39.239579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:39.239607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:39.239634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:39.239661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:39.239689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:39.239716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:39.239744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:39.239777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:39.239805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.720 [2024-07-15 10:02:39.239818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.720 [2024-07-15 10:02:39.239833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:33.720 [2024-07-15 10:02:39.239846 - 10:02:39.242327] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: queued READ commands (sqid:1 nsid:1 lba:113384-113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (sqid:1 nsid:1 lba:113680-114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:33.723 [2024-07-15 10:02:39.242360 - 10:02:39.243572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o; 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: WRITE sqid:1 cid:0 nsid:1 lba:114064-114240 len:8 PRP1 0x0 PRP2 0x0 and READ sqid:1 cid:0 nsid:1 lba:113664-113672 len:8 PRP1 0x0 PRP2 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:33.724 [2024-07-15 10:02:39.243630] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20c6680 was disconnected and freed. reset controller. 
00:29:33.724 [2024-07-15 10:02:39.243648] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:29:33.724 [2024-07-15 10:02:39.243682 - 10:02:39.243779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3-0 nsid:0 cdw10:00000000 cdw11:00000000, each ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:33.724 [2024-07-15 10:02:39.243792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:33.724 [2024-07-15 10:02:39.243843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efb850 (9): Bad file descriptor 
00:29:33.724 [2024-07-15 10:02:39.247083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:29:33.724 [2024-07-15 10:02:39.281580] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:33.724 [2024-07-15 10:02:43.773908 - 10:02:43.774061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:33.725 [2024-07-15 10:02:43.774074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efb850 is same with the state(5) to be set 
00:29:33.725 [2024-07-15 10:02:43.777943 - 10:02:43.779926] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: queued WRITE commands (sqid:1 nsid:1 lba:31256-31384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (sqid:1 nsid:1 lba:30496-30880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:33.728 [2024-07-15 10:02:43.779942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30888 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.779955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.779970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.779983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.779998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.780011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.780026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.780040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.780055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.780068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.780083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.780096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.780111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.780124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.780139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.780152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.780167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.780195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.780210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.780223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.780237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:33.728 [2024-07-15 10:02:43.780250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.780269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.780283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.780297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.780310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.780325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.780338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.780352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.780365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.780380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.780393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.780407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.780420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.780434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.780447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.780462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.780475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.780490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.780503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.780517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.728 [2024-07-15 10:02:43.780530] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.728 [2024-07-15 10:02:43.780545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.729 [2024-07-15 10:02:43.780558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.780572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.729 [2024-07-15 10:02:43.780585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.780600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.729 [2024-07-15 10:02:43.780616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.780631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.729 [2024-07-15 10:02:43.780644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.780659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.729 [2024-07-15 10:02:43.780672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.780686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.729 [2024-07-15 10:02:43.780699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.780713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.729 [2024-07-15 10:02:43.780727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.780741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.729 [2024-07-15 10:02:43.780754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.780768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.729 [2024-07-15 10:02:43.780781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.780812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.729 [2024-07-15 10:02:43.780826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.780841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.729 [2024-07-15 10:02:43.780855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.780870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.729 [2024-07-15 10:02:43.780890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.780906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.729 [2024-07-15 10:02:43.780921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.780935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.729 [2024-07-15 10:02:43.780949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.780964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.729 [2024-07-15 10:02:43.780977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.780995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.729 [2024-07-15 10:02:43.781009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.781024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.729 [2024-07-15 10:02:43.781038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.781053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:31192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.729 [2024-07-15 10:02:43.781066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.781081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.729 [2024-07-15 10:02:43.781094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.781109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.729 [2024-07-15 10:02:43.781122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.781137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.729 [2024-07-15 10:02:43.781151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.781166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:31416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.729 [2024-07-15 10:02:43.781179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.781194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.729 [2024-07-15 10:02:43.781208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.781223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.729 [2024-07-15 10:02:43.781236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.781251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.729 [2024-07-15 10:02:43.781265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.781280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.729 [2024-07-15 10:02:43.781293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.781311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.729 [2024-07-15 10:02:43.781325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.781341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.729 [2024-07-15 10:02:43.781354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.781373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.729 [2024-07-15 10:02:43.781387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.781403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.729 [2024-07-15 10:02:43.781416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.781432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.729 [2024-07-15 10:02:43.781445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.781461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.729 [2024-07-15 10:02:43.781474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.729 [2024-07-15 10:02:43.781489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.729 [2024-07-15 10:02:43.781503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.730 [2024-07-15 10:02:43.781518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.730 [2024-07-15 10:02:43.781531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.730 [2024-07-15 10:02:43.781547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.730 [2024-07-15 10:02:43.781561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.730 [2024-07-15 10:02:43.781576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.730 [2024-07-15 10:02:43.781589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.730 [2024-07-15 10:02:43.781604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.730 [2024-07-15 10:02:43.781618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.730 [2024-07-15 10:02:43.781634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.730 [2024-07-15 10:02:43.781648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.730 [2024-07-15 10:02:43.781663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.730 [2024-07-15 10:02:43.781677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.730 [2024-07-15 10:02:43.781692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.730 [2024-07-15 10:02:43.781706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.730 [2024-07-15 
10:02:43.781721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c6470 is same with the state(5) to be set 00:29:33.730 [2024-07-15 10:02:43.781741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:33.730 [2024-07-15 10:02:43.781753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.730 [2024-07-15 10:02:43.781765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31248 len:8 PRP1 0x0 PRP2 0x0 00:29:33.730 [2024-07-15 10:02:43.781778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.730 [2024-07-15 10:02:43.781842] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20c6470 was disconnected and freed. reset controller. 00:29:33.730 [2024-07-15 10:02:43.781860] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:29:33.730 [2024-07-15 10:02:43.781881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.730 [2024-07-15 10:02:43.785145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.730 [2024-07-15 10:02:43.785187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efb850 (9): Bad file descriptor 00:29:33.730 [2024-07-15 10:02:43.909764] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:33.730 00:29:33.730 Latency(us) 00:29:33.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.730 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:33.730 Verification LBA range: start 0x0 length 0x4000 00:29:33.730 NVMe0n1 : 15.01 8406.72 32.84 840.44 0.00 13815.14 794.93 15631.55 00:29:33.730 =================================================================================================================== 00:29:33.730 Total : 8406.72 32.84 840.44 0.00 13815.14 794.93 15631.55 00:29:33.730 Received shutdown signal, test time was about 15.000000 seconds 00:29:33.730 00:29:33.730 Latency(us) 00:29:33.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.730 =================================================================================================================== 00:29:33.730 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:33.730 10:02:49 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:29:33.730 10:02:49 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:29:33.730 10:02:49 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:29:33.730 10:02:49 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2016476 00:29:33.730 10:02:49 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:29:33.730 10:02:49 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2016476 /var/tmp/bdevperf.sock 00:29:33.730 10:02:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2016476 ']' 00:29:33.730 10:02:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:33.730 10:02:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:29:33.730 10:02:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:33.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:33.730 10:02:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:33.730 10:02:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:33.730 10:02:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:33.730 10:02:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:29:33.730 10:02:49 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:33.730 [2024-07-15 10:02:50.225537] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:33.730 10:02:50 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:33.730 [2024-07-15 10:02:50.462220] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:33.730 10:02:50 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:34.296 NVMe0n1 00:29:34.296 10:02:50 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:34.553 00:29:34.553 10:02:51 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:34.811 00:29:35.068 10:02:51 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:35.068 10:02:51 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:29:35.068 10:02:51 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:35.633 10:02:52 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:29:38.910 10:02:55 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:38.910 10:02:55 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:29:38.910 10:02:55 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2017224 00:29:38.910 10:02:55 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:38.910 10:02:55 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2017224 00:29:39.844 0 00:29:39.844 10:02:56 
nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:39.844 [2024-07-15 10:02:49.738359] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:29:39.844 [2024-07-15 10:02:49.738444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2016476 ] 00:29:39.844 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.844 [2024-07-15 10:02:49.774325] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:39.844 [2024-07-15 10:02:49.802888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.844 [2024-07-15 10:02:49.888527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.844 [2024-07-15 10:02:52.096247] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:39.844 [2024-07-15 10:02:52.096327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.844 [2024-07-15 10:02:52.096349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.844 [2024-07-15 10:02:52.096381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.844 [2024-07-15 10:02:52.096394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.844 [2024-07-15 10:02:52.096408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.845 [2024-07-15 10:02:52.096422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.845 [2024-07-15 10:02:52.096436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.845 [2024-07-15 10:02:52.096449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.845 [2024-07-15 10:02:52.096462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.845 [2024-07-15 10:02:52.096506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.845 [2024-07-15 10:02:52.096537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182b850 (9): Bad file descriptor 00:29:39.845 [2024-07-15 10:02:52.148521] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:39.845 Running I/O for 1 seconds... 
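The try.txt excerpt above shows the shape of the test: bdevperf is started in wait-for-RPC mode (-z) on its own socket, the same subsystem is attached through three target listeners, and the active path is then detached so bdev_nvme fails over to the next trid. A condensed sketch of the traced RPC sequence; the $rpc shorthand is mine, the commands and flags are exactly as traced:

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
# register the same subsystem through all three portals; 4420 becomes the active path
for port in 4420 4421 4422; do
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
        -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
# drop the active portal; queued I/O is aborted (SQ DELETION) and retried on a surviving trid
$rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3   # let the reset/failover settle before the next detach

The pass check at failover.sh@65-@67, traced earlier, then simply requires that 'Resetting controller successful' was logged exactly three times, one per failover hop.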
00:29:39.845 00:29:39.845 Latency(us) 00:29:39.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.845 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:39.845 Verification LBA range: start 0x0 length 0x4000 00:29:39.845 NVMe0n1 : 1.01 8712.79 34.03 0.00 0.00 14630.85 3070.48 12718.84 00:29:39.845 =================================================================================================================== 00:29:39.845 Total : 8712.79 34.03 0.00 0.00 14630.85 3070.48 12718.84 00:29:39.845 10:02:56 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:39.845 10:02:56 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:29:40.103 10:02:56 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:40.361 10:02:57 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:40.361 10:02:57 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:29:40.619 10:02:57 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:40.876 10:02:57 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:29:44.157 10:03:00 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:44.157 10:03:00 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:29:44.157 10:03:00 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2016476 00:29:44.157 10:03:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2016476 ']' 00:29:44.157 10:03:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2016476 00:29:44.157 10:03:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:29:44.157 10:03:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:44.157 10:03:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2016476 00:29:44.157 10:03:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:44.157 10:03:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:44.157 10:03:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2016476' 00:29:44.157 killing process with pid 2016476 00:29:44.157 10:03:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2016476 00:29:44.157 10:03:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2016476 00:29:44.415 10:03:01 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:29:44.415 10:03:01 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:29:44.673 
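The stop path above runs through autotest_common.sh's killprocess helper. Reconstructed from the traced checks at @948-@972 (the early-return branches and the sudo special case are inferred; only the traced calls are certain):

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1                  # '[' -z 2016476 ']'
    kill -0 "$pid" || return                   # bail out if it already exited
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # when the resolved name is 'sudo' the real child would be killed instead;
    # here it is reactor_0 (bdevperf) / reactor_1 (nvmf_tgt), so that branch is skipped
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                # reap it so sockets and hugepages are released
}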
10:03:01 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:44.673 rmmod nvme_tcp 00:29:44.673 rmmod nvme_fabrics 00:29:44.673 rmmod nvme_keyring 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2014323 ']' 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2014323 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2014323 ']' 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2014323 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2014323 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2014323' 00:29:44.673 killing process with pid 2014323 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2014323 00:29:44.673 10:03:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2014323 00:29:44.931 10:03:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:44.931 10:03:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:44.931 10:03:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:44.931 10:03:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:44.931 10:03:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:44.931 10:03:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.931 10:03:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:44.931 10:03:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.467 10:03:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:47.467 00:29:47.467 real 0m34.780s 00:29:47.467 user 2m2.936s 00:29:47.467 sys 0m5.693s 00:29:47.467 10:03:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:47.467 10:03:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
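nvmftestfini then unwinds everything in a fixed order, visible in the trace: sync, unload the initiator kernel modules, kill the target by its saved pid, and undo the namespace plumbing. A sketch; the break condition in the retry loop is assumed, the rest appears verbatim above:

sync
set +e
for i in {1..20}; do
    # removing nvme-tcp also pulls out nvme_fabrics and nvme_keyring (the rmmod lines above);
    # retried because the module can stay busy while connections drain
    modprobe -v -r nvme-tcp && break
done
modprobe -v -r nvme-fabrics
set -e
killprocess "$nvmfpid"        # the nvmf_tgt backing this suite, pid 2014323 in this run
_remove_spdk_ns               # deletes the cvl_0_0_ns_spdk namespace
ip -4 addr flush cvl_0_1      # drop the initiator-side 10.0.0.1/24 address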
00:29:47.467 ************************************ 00:29:47.467 END TEST nvmf_failover 00:29:47.467 ************************************ 00:29:47.467 10:03:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:47.467 10:03:03 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:47.467 10:03:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:47.467 10:03:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:47.467 10:03:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:47.467 ************************************ 00:29:47.467 START TEST nvmf_host_discovery 00:29:47.467 ************************************ 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:47.467 * Looking for test storage... 00:29:47.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.467 10:03:03 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=[condensed: /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin prepended, several times over, ahead of the stock system PATH] 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=[same value, go dir rotated to the front] 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=[same value, protoc dir rotated to the front] 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo [the exported PATH, identical to the @4 value] 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:47.468 10:03:03
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:29:47.468 10:03:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.368 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:49.368 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:29:49.368 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:49.368 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:49.368 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:49.368 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:49.368 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:49.369 10:03:05 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:49.369 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:49.369 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:49.369 10:03:05 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:49.369 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:49.369 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:49.369 10:03:05 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:49.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:49.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:29:49.369 00:29:49.369 --- 10.0.0.2 ping statistics --- 00:29:49.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.369 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:49.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:49.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:29:49.369 00:29:49.369 --- 10.0.0.1 ping statistics --- 00:29:49.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.369 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2019957 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2019957 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2019957 ']' 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:49.369 10:03:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.369 [2024-07-15 10:03:05.899704] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:29:49.369 [2024-07-15 10:03:05.899790] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.369 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.369 [2024-07-15 10:03:05.937356] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:49.369 [2024-07-15 10:03:05.963430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.370 [2024-07-15 10:03:06.049736] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:49.370 [2024-07-15 10:03:06.049791] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:49.370 [2024-07-15 10:03:06.049807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:49.370 [2024-07-15 10:03:06.049821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:49.370 [2024-07-15 10:03:06.049833] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
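Stripped of the xtrace noise, the network plumbing nvmf_tcp_init performed above is a short, self-contained sequence. Collected from this run (interface names, namespace, and addresses are exactly as logged):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  # Target-side port moves into its own namespace; initiator port stays on the host.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, then sanity-check reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched inside the namespace (NVMF_APP is prefixed with "ip netns exec cvl_0_0_ns_spdk"), so host and target run on one machine but talk over a real TCP path between two physical ports.
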
00:29:49.370 [2024-07-15 10:03:06.049872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.628 [2024-07-15 10:03:06.190749] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.628 [2024-07-15 10:03:06.198951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.628 null0 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.628 null1 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2019985 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 2019985 /tmp/host.sock 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2019985 ']' 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:49.628 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:49.628 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.628 [2024-07-15 10:03:06.272121] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:29:49.628 [2024-07-15 10:03:06.272205] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2019985 ] 00:29:49.628 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.628 [2024-07-15 10:03:06.305642] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:49.628 [2024-07-15 10:03:06.337008] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.886 [2024-07-15 10:03:06.428521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:49.886 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.144 10:03:06 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.144 [2024-07-15 10:03:06.836685] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:50.144 10:03:06 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:50.144 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:50.402 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.403 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:50.403 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.403 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:50.403 10:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:50.403 10:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.403 10:03:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:29:50.403 10:03:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:29:50.969 [2024-07-15 10:03:07.608721] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:50.969 [2024-07-15 10:03:07.608767] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:50.969 [2024-07-15 10:03:07.608793] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:50.969 [2024-07-15 10:03:07.738199] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:51.227 [2024-07-15 10:03:07.839750] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:51.227 [2024-07-15 10:03:07.839774] 
bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
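The (( max-- )) / eval / sleep 1 lines that keep reappearing above belong to SPDK's generic polling helper in common/autotest_common.sh. Reconstructed from the xtrace (@912 stores the condition string, @913 sets max=10, @914 decrements, @915 evals, @916 returns 0, @918 sleeps), it is roughly the following; treat this as a sketch of the pattern, not the verbatim source:

  # Poll a shell expression until it holds, for at most ~10 seconds.
  waitforcondition() {
      local cond=$1        # e.g. '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
      local max=10
      while (( max-- )); do
          if eval "$cond"; then
              return 0     # condition held
          fi
          sleep 1
      done
      return 1             # assumption: a timeout is reported to the caller as failure
  }

Each helper it polls (get_subsystem_names, get_bdev_list, get_subsystem_paths, get_notification_count) is itself an rpc_cmd piped through jq, sort, and xargs, which is why every probe re-appears in full in the trace.
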
00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:51.485 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:51.486 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:51.743 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.743 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:51.743 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:51.743 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:29:51.743 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:51.743 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:51.743 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:51.743 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:51.743 10:03:08 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.744 [2024-07-15 10:03:08.401208] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:51.744 [2024-07-15 10:03:08.401663] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:51.744 [2024-07-15 10:03:08.401714] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:51.744 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.002 [2024-07-15 10:03:08.528573] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:52.002 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:29:52.002 10:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:29:52.002 [2024-07-15 10:03:08.593142] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:52.002 [2024-07-15 10:03:08.593173] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:52.002 [2024-07-15 10:03:08.593182] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.964 [2024-07-15 10:03:09.629614] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:52.964 [2024-07-15 10:03:09.629651] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.964 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:52.965 [2024-07-15 10:03:09.638707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.965 [2024-07-15 10:03:09.638738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.965 [2024-07-15 10:03:09.638755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.965 [2024-07-15 10:03:09.638784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.965 [2024-07-15 10:03:09.638798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:29:52.965 [2024-07-15 10:03:09.638811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.965 [2024-07-15 10:03:09.638825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.965 [2024-07-15 10:03:09.638838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.965 [2024-07-15 10:03:09.638852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4c6c0 is same with the state(5) to be set 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.965 [2024-07-15 10:03:09.648700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4c6c0 (9): Bad file descriptor 00:29:52.965 [2024-07-15 10:03:09.658741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:52.965 [2024-07-15 10:03:09.658981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-07-15 10:03:09.659011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4c6c0 with addr=10.0.0.2, port=4420 00:29:52.965 [2024-07-15 10:03:09.659029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4c6c0 is same with the state(5) to be set 00:29:52.965 [2024-07-15 10:03:09.659051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4c6c0 (9): Bad file descriptor 00:29:52.965 [2024-07-15 10:03:09.659072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:52.965 [2024-07-15 10:03:09.659086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:52.965 [2024-07-15 10:03:09.659106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:52.965 [2024-07-15 10:03:09.659127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.965 [2024-07-15 10:03:09.668836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:52.965 [2024-07-15 10:03:09.669044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-07-15 10:03:09.669073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4c6c0 with addr=10.0.0.2, port=4420 00:29:52.965 [2024-07-15 10:03:09.669089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4c6c0 is same with the state(5) to be set 00:29:52.965 [2024-07-15 10:03:09.669111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4c6c0 (9): Bad file descriptor 00:29:52.965 [2024-07-15 10:03:09.669131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:52.965 [2024-07-15 10:03:09.669145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:52.965 [2024-07-15 10:03:09.669158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:29:52.965 [2024-07-15 10:03:09.669176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:52.965 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:52.965 [2024-07-15 10:03:09.678938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:52.965 [2024-07-15 10:03:09.679121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-07-15 10:03:09.679149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4c6c0 with addr=10.0.0.2, port=4420 00:29:52.965 [2024-07-15 10:03:09.679166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4c6c0 is same with the state(5) to be set 00:29:52.965 [2024-07-15 10:03:09.679190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4c6c0 (9): Bad file descriptor 00:29:52.965 [2024-07-15 10:03:09.679223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:52.965 [2024-07-15 10:03:09.679240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:52.965 [2024-07-15 10:03:09.679254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:52.965 [2024-07-15 10:03:09.679273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
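The errno = 111 in these reconnect attempts is ECONNREFUSED: host/discovery.sh@127 has just removed the 4420 listener, so bdev_nvme's reset path keeps dialing a port nobody is listening on and each attempt ends in "Resetting controller failed". Recovery comes from the discovery service rather than the retries: the next discovery log page reports the 4420 path "not found" and 4421 "found again", the stale path is dropped, and the controller continues on 4421 alone. The get_subsystem_paths probe the test uses to observe this boils down to the following, with the rpc_cmd wrapper stripped (the scripts/rpc.py entry point is assumed; the trace invokes it indirectly):

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs

It prints "4420 4421" while both paths exist and just "4421" once the removed listener's path has been pruned.
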
00:29:52.965 [2024-07-15 10:03:09.689014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:52.965 [2024-07-15 10:03:09.689167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-07-15 10:03:09.689194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4c6c0 with addr=10.0.0.2, port=4420 00:29:52.965 [2024-07-15 10:03:09.689211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4c6c0 is same with the state(5) to be set 00:29:52.965 [2024-07-15 10:03:09.689232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4c6c0 (9): Bad file descriptor 00:29:52.965 [2024-07-15 10:03:09.689267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:52.965 [2024-07-15 10:03:09.689285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:52.965 [2024-07-15 10:03:09.689298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:52.965 [2024-07-15 10:03:09.689330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.965 [2024-07-15 10:03:09.699086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:52.965 [2024-07-15 10:03:09.699283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-07-15 10:03:09.699310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4c6c0 with addr=10.0.0.2, port=4420 00:29:52.965 [2024-07-15 10:03:09.699326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4c6c0 is same with the state(5) to be set 00:29:52.965 [2024-07-15 10:03:09.699348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4c6c0 (9): Bad file descriptor 00:29:52.965 [2024-07-15 10:03:09.699381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:52.965 [2024-07-15 10:03:09.699398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:52.965 [2024-07-15 10:03:09.699411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:52.966 [2024-07-15 10:03:09.699430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
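While the host driver cycles through these doomed reconnects, it is worth collapsing the control plane this test has driven so far into its bare RPC sequence (NQNs, ports, and bdev names are from this run; the direct scripts/rpc.py spelling is an assumption, since the trace goes through the rpc_cmd wrapper):

  # Target side (default /var/tmp/spdk.sock, nvmf_tgt pid 2019957).
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py bdev_wait_for_examine
  # Host side (second nvmf_tgt, pid 2019985, -r /tmp/host.sock).
  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # Target again: surface a subsystem, then move its listeners around.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Everything else in the trace is polling and notification bookkeeping around these calls: each add/remove is followed by waits for the expected controller names, bdev list, paths, and notify_get_notifications count.
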
00:29:52.966 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.966 [2024-07-15 10:03:09.709157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:52.966 [2024-07-15 10:03:09.709388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-07-15 10:03:09.709414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4c6c0 with addr=10.0.0.2, port=4420 00:29:52.966 [2024-07-15 10:03:09.709431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4c6c0 is same with the state(5) to be set 00:29:52.966 [2024-07-15 10:03:09.709452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4c6c0 (9): Bad file descriptor 00:29:52.966 [2024-07-15 10:03:09.709497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:52.966 [2024-07-15 10:03:09.709516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:52.966 [2024-07-15 10:03:09.709529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:52.966 [2024-07-15 10:03:09.709548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.966 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:52.966 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:52.966 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:52.966 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:52.966 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:52.966 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:52.966 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:29:52.966 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:29:52.966 [2024-07-15 10:03:09.716527] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:52.966 [2024-07-15 10:03:09.716559] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:52.966 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:52.966 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:52.966 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.966 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.966 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:52.966 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:52.966 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:29:53.224 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 
-- # xtrace_disable 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.225 10:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.599 [2024-07-15 10:03:10.978799] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:54.599 [2024-07-15 10:03:10.978832] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:54.599 [2024-07-15 10:03:10.978857] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:54.599 [2024-07-15 10:03:11.106301] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:54.599 [2024-07-15 10:03:11.213710] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:54.599 [2024-07-15 10:03:11.213763] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:29:54.599 request: 00:29:54.599 { 00:29:54.599 "name": "nvme", 00:29:54.599 "trtype": "tcp", 00:29:54.599 "traddr": "10.0.0.2", 00:29:54.599 "adrfam": "ipv4", 00:29:54.599 "trsvcid": "8009", 00:29:54.599 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:54.599 "wait_for_attach": true, 00:29:54.599 "method": "bdev_nvme_start_discovery", 00:29:54.599 "req_id": 1 00:29:54.599 } 00:29:54.599 Got JSON-RPC error response 00:29:54.599 response: 00:29:54.599 { 00:29:54.599 "code": -17, 00:29:54.599 "message": "File exists" 00:29:54.599 } 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:54.599 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.600 request: 00:29:54.600 { 00:29:54.600 "name": "nvme_second", 00:29:54.600 "trtype": "tcp", 00:29:54.600 "traddr": "10.0.0.2", 00:29:54.600 "adrfam": "ipv4", 00:29:54.600 "trsvcid": "8009", 00:29:54.600 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:54.600 "wait_for_attach": true, 00:29:54.600 "method": "bdev_nvme_start_discovery", 00:29:54.600 "req_id": 1 00:29:54.600 } 00:29:54.600 Got JSON-RPC error response 00:29:54.600 response: 00:29:54.600 { 00:29:54.600 "code": -17, 00:29:54.600 "message": "File exists" 00:29:54.600 } 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.600 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:54.858 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.858 10:03:11 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:54.858 10:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:54.858 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:29:54.858 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:54.858 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:54.858 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:54.858 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:54.858 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:54.858 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:54.858 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.858 10:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.790 [2024-07-15 10:03:12.421956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.790 [2024-07-15 10:03:12.422023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf66d50 with addr=10.0.0.2, port=8010 00:29:55.790 [2024-07-15 10:03:12.422053] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:55.790 [2024-07-15 10:03:12.422068] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:55.790 [2024-07-15 10:03:12.422082] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:56.724 [2024-07-15 10:03:13.424323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 10:03:13.424368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf66d50 with addr=10.0.0.2, port=8010 00:29:56.724 [2024-07-15 10:03:13.424394] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:56.724 [2024-07-15 10:03:13.424409] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:56.724 [2024-07-15 10:03:13.424423] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:57.656 [2024-07-15 10:03:14.426536] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:57.656 request: 00:29:57.656 { 00:29:57.656 "name": "nvme_second", 00:29:57.656 "trtype": "tcp", 00:29:57.656 "traddr": "10.0.0.2", 00:29:57.656 "adrfam": "ipv4", 00:29:57.656 "trsvcid": "8010", 00:29:57.656 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:57.656 "wait_for_attach": false, 00:29:57.656 "attach_timeout_ms": 3000, 00:29:57.656 "method": "bdev_nvme_start_discovery", 00:29:57.656 "req_id": 1 00:29:57.656 } 00:29:57.656 Got JSON-RPC error response 00:29:57.656 response: 00:29:57.656 { 00:29:57.656 "code": -110, 
00:29:57.656 "message": "Connection timed out" 00:29:57.656 } 00:29:57.656 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:57.656 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:29:57.656 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:57.656 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:57.656 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:57.656 10:03:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:29:57.656 10:03:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:57.656 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.656 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.656 10:03:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:57.656 10:03:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:57.656 10:03:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2019985 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:57.914 rmmod nvme_tcp 00:29:57.914 rmmod nvme_fabrics 00:29:57.914 rmmod nvme_keyring 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2019957 ']' 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2019957 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2019957 ']' 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2019957 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2019957 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2019957' 00:29:57.914 killing process with pid 2019957 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2019957 00:29:57.914 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2019957 00:29:58.176 10:03:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:58.176 10:03:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:58.176 10:03:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:58.176 10:03:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:58.176 10:03:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:58.176 10:03:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.176 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:58.176 10:03:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.110 10:03:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:00.110 00:30:00.110 real 0m13.160s 00:30:00.110 user 0m19.005s 00:30:00.110 sys 0m2.828s 00:30:00.110 10:03:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:00.110 10:03:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:00.110 ************************************ 00:30:00.110 END TEST nvmf_host_discovery 00:30:00.110 ************************************ 00:30:00.110 10:03:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:00.110 10:03:16 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:00.110 10:03:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:00.110 10:03:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:00.110 10:03:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:00.110 ************************************ 00:30:00.110 START TEST nvmf_host_multipath_status 00:30:00.110 ************************************ 00:30:00.110 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:00.369 * Looking for test storage... 
00:30:00.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:00.369 10:03:16 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:00.369 10:03:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:02.272 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:02.272 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
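Before the target starts, nvmf/common.sh pins one port of the e810 NIC inside a network namespace as the target and leaves the other port in the root namespace as the initiator. Condensed from the ip/iptables records that follow (nvmf/common.sh@229-267), with interface names and addresses taken straight from the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # accept NVMe/TCP traffic arriving on the initiator-side port

The two pings further down verify the path in both directions before the target is launched inside the namespace.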
00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:02.272 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.272 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:02.273 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:02.273 10:03:18 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:02.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:02.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:30:02.273 00:30:02.273 --- 10.0.0.2 ping statistics --- 00:30:02.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.273 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:02.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:02.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:30:02.273 00:30:02.273 --- 10.0.0.1 ping statistics --- 00:30:02.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.273 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2023517 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2023517 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2023517 ']' 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:02.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:02.273 10:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:02.273 [2024-07-15 10:03:18.977956] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:30:02.273 [2024-07-15 10:03:18.978041] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:02.273 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.273 [2024-07-15 10:03:19.020707] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:02.273 [2024-07-15 10:03:19.048805] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:02.531 [2024-07-15 10:03:19.137095] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:02.531 [2024-07-15 10:03:19.137159] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:02.531 [2024-07-15 10:03:19.137187] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:02.531 [2024-07-15 10:03:19.137198] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:02.531 [2024-07-15 10:03:19.137208] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:02.531 [2024-07-15 10:03:19.137260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.531 [2024-07-15 10:03:19.137266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.531 10:03:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:02.531 10:03:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:30:02.531 10:03:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:02.531 10:03:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:02.531 10:03:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:02.531 10:03:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:02.531 10:03:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2023517 00:30:02.531 10:03:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:02.789 [2024-07-15 10:03:19.559422] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:03.046 10:03:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:03.303 Malloc0 00:30:03.303 10:03:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:03.559 10:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:03.816 10:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:30:03.816 [2024-07-15 10:03:20.585966] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:04.073 10:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:04.073 [2024-07-15 10:03:20.834570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:04.073 10:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2023798 00:30:04.073 10:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:04.073 10:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2023798 /var/tmp/bdevperf.sock 00:30:04.073 10:03:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2023798 ']' 00:30:04.073 10:03:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:04.073 10:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:04.073 10:03:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:04.073 10:03:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:04.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:04.073 10:03:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:04.073 10:03:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:04.638 10:03:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:04.638 10:03:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:30:04.638 10:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:04.638 10:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:30:05.201 Nvme0n1 00:30:05.201 10:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:05.766 Nvme0n1 00:30:05.766 10:03:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:05.766 10:03:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:07.659 10:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:07.659 10:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:07.916 10:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:08.172 10:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:09.103 10:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:09.103 10:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:09.103 10:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:09.103 10:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:09.361 10:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:09.361 10:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:09.361 10:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:09.361 10:03:26 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:09.617 10:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:09.617 10:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:09.617 10:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:09.617 10:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:09.875 10:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:09.875 10:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:09.875 10:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:09.875 10:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:10.132 10:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:10.132 10:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:10.132 10:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:10.132 10:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:10.389 10:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:10.389 10:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:10.389 10:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:10.389 10:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:10.647 10:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:10.647 10:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:10.647 10:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:10.905 10:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:11.163 10:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:12.096 10:03:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:12.096 10:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:12.096 10:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:12.096 10:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:12.354 10:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:12.354 10:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:12.354 10:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:12.354 10:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:12.612 10:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:12.612 10:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:12.612 10:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:12.612 10:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:12.870 10:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:12.870 10:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:12.870 10:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:12.870 10:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:13.157 10:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.157 10:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:13.157 10:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.157 10:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:13.415 10:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.415 10:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:13.415 10:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.415 10:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:13.673 10:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.673 10:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:13.673 10:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:13.930 10:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:14.188 10:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:15.117 10:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:15.117 10:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:15.117 10:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.117 10:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:15.375 10:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:15.375 10:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:15.375 10:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.375 10:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:15.632 10:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:15.632 10:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:15.632 10:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.632 10:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:15.890 10:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:15.890 10:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:15.890 10:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.890 10:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:16.148 10:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.148 10:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:16.148 10:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.148 10:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:16.406 10:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.406 10:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:16.406 10:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.406 10:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:16.671 10:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.671 10:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:16.671 10:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:16.930 10:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:17.188 10:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:18.121 10:03:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:18.121 10:03:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:18.121 10:03:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.121 10:03:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:18.380 10:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.380 10:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:18.380 10:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.380 10:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:18.638 10:03:35 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:18.638 10:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:18.638 10:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.638 10:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:18.895 10:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.895 10:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:18.895 10:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.895 10:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:19.153 10:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.153 10:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:19.153 10:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.153 10:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:19.411 10:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.411 10:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:19.411 10:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.411 10:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:19.669 10:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:19.669 10:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:19.669 10:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:19.927 10:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:20.185 10:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:21.118 10:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:21.118 10:03:37 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:21.118 10:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.118 10:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:21.376 10:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:21.376 10:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:21.376 10:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.376 10:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:21.633 10:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:21.633 10:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:21.633 10:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.633 10:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:21.891 10:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.891 10:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:21.891 10:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.891 10:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:22.149 10:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:22.149 10:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:22.149 10:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.149 10:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:22.407 10:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:22.407 10:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:22.407 10:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.407 10:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:22.665 10:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:22.665 10:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:22.665 10:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:22.923 10:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:23.182 10:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:24.140 10:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:30:24.140 10:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:24.140 10:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.140 10:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:24.398 10:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:24.398 10:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:24.398 10:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.398 10:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:24.655 10:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:24.655 10:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:24.655 10:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.655 10:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:24.913 10:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:24.913 10:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:24.913 10:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.913 10:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:25.170 10:03:41 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:25.171 10:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:25.171 10:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.171 10:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:25.428 10:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:25.428 10:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:25.428 10:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.428 10:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:25.685 10:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:25.685 10:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:25.942 10:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:30:25.942 10:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:26.200 10:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:26.483 10:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:27.417 10:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:27.417 10:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:27.417 10:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.417 10:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:27.676 10:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:27.676 10:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:27.676 10:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.676 10:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:30:27.935 10:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:27.935 10:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:27.935 10:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.935 10:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:28.193 10:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:28.193 10:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:28.193 10:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.193 10:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:28.451 10:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:28.451 10:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:28.451 10:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.451 10:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:28.709 10:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:28.709 10:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:28.709 10:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.709 10:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:28.967 10:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:28.967 10:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:28.967 10:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:29.225 10:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:29.483 10:03:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:30:30.418 10:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:30:30.418 10:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:30.418 10:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.418 10:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:30.676 10:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:30.676 10:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:30.676 10:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.676 10:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:30.934 10:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:30.934 10:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:30.934 10:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.934 10:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:31.192 10:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.192 10:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:31.192 10:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.192 10:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:31.451 10:03:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.451 10:03:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:31.451 10:03:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.451 10:03:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:31.708 10:03:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.708 10:03:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:31.708 10:03:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.708 10:03:48 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:31.966 10:03:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.966 10:03:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:31.966 10:03:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:32.223 10:03:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:32.482 10:03:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:30:33.416 10:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:30:33.416 10:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:33.416 10:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.416 10:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:33.674 10:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.674 10:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:33.674 10:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.674 10:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:33.932 10:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.932 10:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:33.932 10:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.932 10:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:34.190 10:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.190 10:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:34.190 10:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.190 10:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:34.448 10:03:51 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.448 10:03:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:34.448 10:03:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.448 10:03:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:34.706 10:03:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.706 10:03:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:34.706 10:03:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.706 10:03:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:34.965 10:03:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.965 10:03:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:30:34.965 10:03:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:35.223 10:03:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:35.482 10:03:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:30:36.856 10:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:30:36.856 10:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:36.856 10:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.856 10:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:36.856 10:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.856 10:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:36.856 10:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.856 10:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:37.113 10:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:37.113 10:03:53 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:37.113 10:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.113 10:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:37.371 10:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.371 10:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:37.371 10:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.371 10:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:37.629 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.629 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:37.629 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.629 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:37.887 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.887 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:37.887 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.887 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:38.145 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:38.145 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2023798 00:30:38.145 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2023798 ']' 00:30:38.145 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2023798 00:30:38.145 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:30:38.145 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:38.145 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2023798 00:30:38.145 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:30:38.145 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:30:38.145 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
2023798' 00:30:38.145 killing process with pid 2023798 00:30:38.145 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2023798 00:30:38.145 10:03:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2023798 00:30:38.145 Connection closed with partial response: 00:30:38.145 00:30:38.145 00:30:38.407 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2023798 00:30:38.407 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:38.407 [2024-07-15 10:03:20.897275] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:30:38.407 [2024-07-15 10:03:20.897355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2023798 ] 00:30:38.407 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.407 [2024-07-15 10:03:20.928481] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:38.407 [2024-07-15 10:03:20.956021] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.407 [2024-07-15 10:03:21.040331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:38.407 Running I/O for 90 seconds... 00:30:38.407 [2024-07-15 10:03:36.615432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.407 [2024-07-15 10:03:36.615497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:38.407 [2024-07-15 10:03:36.615572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.407 [2024-07-15 10:03:36.615595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:38.407 [2024-07-15 10:03:36.615620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.407 [2024-07-15 10:03:36.615637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:38.407 [2024-07-15 10:03:36.615660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.407 [2024-07-15 10:03:36.615677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:38.407 [2024-07-15 10:03:36.615699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:38.407 [2024-07-15 10:03:36.615720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:38.407 [2024-07-15 10:03:36.615770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:38.407 [2024-07-15 10:03:36.615794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:30:38.408 [2024-07-15 10:03:36.615835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.408 [2024-07-15 10:03:36.615852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:30:38.408 [... the same WRITE command/completion NOTICE pair repeats for every 8-block I/O from lba:129080 through lba:130040, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:30:38.411 [2024-07-15 10:03:52.208839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.411 [2024-07-15 10:03:52.208936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:30:38.411 [... matching READ pairs repeat for lbas between 9688 and 10232, followed by a second burst of WRITE pairs from lba:10256 through lba:10640, all with the same (03/02) status ...]
00:30:38.412 [2024-07-15 10:03:52.213221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.412 [2024-07-15 10:03:52.213236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:30:38.412 Received shutdown signal, test time was about 32.331543 seconds
00:30:38.412
00:30:38.412 Latency(us)
00:30:38.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:38.412 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:38.412 Verification LBA range: start 0x0 length 0x4000
00:30:38.412 Nvme0n1 : 32.33 7272.61 28.41 0.00 0.00 17570.81 307.96 4026531.84
00:30:38.412 ===================================================================================================================
00:30:38.412 Total : 7272.61 28.41 0.00 0.00 17570.81 307.96 4026531.84
=================================================================================================================== 00:30:38.412 Total : 7272.61 28.41 0.00 0.00 17570.81 307.96 4026531.84 00:30:38.412 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:38.670 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:30:38.670 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:38.670 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:30:38.670 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:38.670 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:30:38.670 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:38.670 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:30:38.671 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:38.671 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:38.671 rmmod nvme_tcp 00:30:38.671 rmmod nvme_fabrics 00:30:38.671 rmmod nvme_keyring 00:30:38.671 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:38.671 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:30:38.671 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:30:38.671 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2023517 ']' 00:30:38.671 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2023517 00:30:38.671 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2023517 ']' 00:30:38.671 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2023517 00:30:38.671 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:30:38.671 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:38.671 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2023517 00:30:38.671 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:38.671 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:38.671 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2023517' 00:30:38.671 killing process with pid 2023517 00:30:38.671 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2023517 00:30:38.671 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2023517 00:30:38.930 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:38.930 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:38.930 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:38.930 10:03:55 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:38.930 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:38.930 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.930 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:38.930 10:03:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.491 10:03:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:41.491 00:30:41.491 real 0m40.798s 00:30:41.491 user 1m59.099s 00:30:41.491 sys 0m12.041s 00:30:41.491 10:03:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:41.491 10:03:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:41.491 ************************************ 00:30:41.491 END TEST nvmf_host_multipath_status 00:30:41.491 ************************************ 00:30:41.491 10:03:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:41.491 10:03:57 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:41.491 10:03:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:41.491 10:03:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:41.491 10:03:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:41.491 ************************************ 00:30:41.491 START TEST nvmf_discovery_remove_ifc 00:30:41.491 ************************************ 00:30:41.491 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:41.491 * Looking for test storage... 
00:30:41.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:41.491 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:41.491 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:30:41.491 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:41.491 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:41.491 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:41.491 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:41.491 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:41.491 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:41.491 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:41.491 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:30:41.492 10:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:43.394 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:43.394 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:30:43.394 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:43.394 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:43.394 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:43.394 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:43.394 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:43.394 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:30:43.394 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:43.394 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:43.395 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:43.395 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:43.395 10:03:59 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:43.395 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:43.395 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:43.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:43.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:30:43.395 00:30:43.395 --- 10.0.0.2 ping statistics --- 00:30:43.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.395 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:43.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:43.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:30:43.395 00:30:43.395 --- 10.0.0.1 ping statistics --- 00:30:43.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.395 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:43.395 10:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:43.395 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:30:43.395 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:43.395 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:43.395 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:43.395 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2029985 00:30:43.395 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:43.395 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2029985 00:30:43.395 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2029985 ']' 00:30:43.395 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.395 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:43.395 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.395 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:43.395 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:43.395 [2024-07-15 10:04:00.060405] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:30:43.395 [2024-07-15 10:04:00.060486] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:43.395 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.396 [2024-07-15 10:04:00.097438] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:30:43.396 [2024-07-15 10:04:00.124426] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.654 [2024-07-15 10:04:00.208738] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:43.654 [2024-07-15 10:04:00.208790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:43.654 [2024-07-15 10:04:00.208818] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:43.654 [2024-07-15 10:04:00.208830] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:43.654 [2024-07-15 10:04:00.208839] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:43.654 [2024-07-15 10:04:00.208896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.654 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:43.654 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:30:43.654 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:43.654 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:43.654 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:43.654 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:43.654 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:30:43.654 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.654 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:43.654 [2024-07-15 10:04:00.361969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:43.654 [2024-07-15 10:04:00.370186] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:43.654 null0 00:30:43.654 [2024-07-15 10:04:00.402068] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.654 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.654 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2030011 00:30:43.654 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:30:43.654 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2030011 /tmp/host.sock 00:30:43.654 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2030011 ']' 00:30:43.655 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:30:43.655 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:43.655 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:30:43.655 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:43.655 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:43.655 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:43.913 [2024-07-15 10:04:00.467708] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:30:43.913 [2024-07-15 10:04:00.467786] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2030011 ] 00:30:43.913 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.913 [2024-07-15 10:04:00.500344] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:43.913 [2024-07-15 10:04:00.530245] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.913 [2024-07-15 10:04:00.620244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.913 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:43.913 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:30:43.913 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:43.913 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:43.913 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.913 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:43.913 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.913 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:43.913 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.913 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:44.172 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.172 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:44.172 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.172 10:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:45.103 [2024-07-15 10:04:01.799546] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:45.103 [2024-07-15 10:04:01.799582] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:45.103 [2024-07-15 10:04:01.799602] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:45.103 [2024-07-15 10:04:01.885850] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:45.360 [2024-07-15 10:04:01.991432] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:45.360 [2024-07-15 10:04:01.991488] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:45.360 [2024-07-15 10:04:01.991525] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:45.360 [2024-07-15 10:04:01.991548] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:45.360 [2024-07-15 10:04:01.991585] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:45.360 10:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.360 10:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:45.360 10:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:45.360 10:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:45.360 10:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.360 10:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:45.360 10:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:45.360 10:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:45.360 10:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:45.360 [2024-07-15 10:04:01.997985] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c32370 was disconnected and freed. delete nvme_qpair. 
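The xtrace lines above and below repeatedly invoke two small helpers from discovery_remove_ifc.sh: one reads the host app's bdev list over its private RPC socket, the other polls it until the list reaches an expected value. A minimal sketch of both, reconstructed from the trace (rpc_cmd is the harness wrapper around scripts/rpc.py; the in-tree helpers may differ in detail, e.g. by capping the retries):

    # List the bdev names known to the host app, flattened into one
    # sorted, space-separated string (empty when no bdev is attached).
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Block until the bdev list equals the expected value: a name while
    # waiting for a controller to attach, '' while waiting for teardown.
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }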
00:30:45.360 10:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.361 10:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:45.361 10:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:45.361 10:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:45.361 10:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:45.361 10:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:45.361 10:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:45.361 10:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:45.361 10:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.361 10:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:45.361 10:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:45.361 10:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:45.361 10:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.361 10:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:45.361 10:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:46.732 10:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:46.732 10:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:46.732 10:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:46.732 10:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.732 10:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:46.732 10:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:46.732 10:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:46.732 10:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.732 10:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:46.732 10:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:47.664 10:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:47.664 10:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:47.664 10:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.664 10:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:47.664 10:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:47.664 10:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:30:47.664 10:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:47.664 10:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.664 10:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:47.664 10:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:48.600 10:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:48.600 10:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.600 10:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:48.600 10:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.600 10:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:48.600 10:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:48.600 10:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:48.600 10:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.600 10:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:48.600 10:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:49.532 10:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:49.532 10:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:49.532 10:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.532 10:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.532 10:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:49.532 10:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:49.532 10:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:49.532 10:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.532 10:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:49.532 10:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:50.908 10:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:50.908 10:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:50.908 10:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:50.908 10:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.908 10:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:50.908 10:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:50.908 10:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:50.908 10:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:30:50.908 10:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:50.908 10:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:50.908 [2024-07-15 10:04:07.432672] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:50.908 [2024-07-15 10:04:07.432739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.908 [2024-07-15 10:04:07.432762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.908 [2024-07-15 10:04:07.432787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.908 [2024-07-15 10:04:07.432803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.908 [2024-07-15 10:04:07.432818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.908 [2024-07-15 10:04:07.432833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.908 [2024-07-15 10:04:07.432848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.908 [2024-07-15 10:04:07.432862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.908 [2024-07-15 10:04:07.432884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.908 [2024-07-15 10:04:07.432900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.908 [2024-07-15 10:04:07.432929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8d50 is same with the state(5) to be set 00:30:50.908 [2024-07-15 10:04:07.442690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf8d50 (9): Bad file descriptor 00:30:50.908 [2024-07-15 10:04:07.452736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:51.846 10:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:51.846 10:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:51.846 10:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:51.846 10:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.846 10:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:51.846 10:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:51.846 10:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:51.846 [2024-07-15 10:04:08.466942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:51.846 [2024-07-15 
10:04:08.467022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bf8d50 with addr=10.0.0.2, port=4420 00:30:51.846 [2024-07-15 10:04:08.467047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf8d50 is same with the state(5) to be set 00:30:51.846 [2024-07-15 10:04:08.467092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf8d50 (9): Bad file descriptor 00:30:51.846 [2024-07-15 10:04:08.467564] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:51.846 [2024-07-15 10:04:08.467593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:51.846 [2024-07-15 10:04:08.467609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:51.846 [2024-07-15 10:04:08.467624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:51.846 [2024-07-15 10:04:08.467651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.846 [2024-07-15 10:04:08.467667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:51.846 10:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.846 10:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:51.846 10:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:52.784 [2024-07-15 10:04:09.470161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:52.784 [2024-07-15 10:04:09.470200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:52.784 [2024-07-15 10:04:09.470215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:52.784 [2024-07-15 10:04:09.470243] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:30:52.784 [2024-07-15 10:04:09.470268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
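The errno 110 storm and the "Resetting controller failed" messages above are the expected outcome here, not a bug: the discovery controller was created earlier in this trace with deliberately tight reconnect limits, so once the target address vanishes it is declared lost within seconds instead of retrying indefinitely. The relevant invocation, repeated from earlier in the trace with the timeout flags annotated:

    # --reconnect-delay-sec 1      wait 1s between reconnect attempts
    # --fast-io-fail-timeout-sec 1 fail queued I/O after 1s without a link
    # --ctrlr-loss-timeout-sec 2   give up on the controller after 2s
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach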
00:30:52.784 [2024-07-15 10:04:09.470307] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:52.784 [2024-07-15 10:04:09.470351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.784 [2024-07-15 10:04:09.470374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.784 [2024-07-15 10:04:09.470393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.784 [2024-07-15 10:04:09.470408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.784 [2024-07-15 10:04:09.470423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.784 [2024-07-15 10:04:09.470438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.784 [2024-07-15 10:04:09.470453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.784 [2024-07-15 10:04:09.470467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.784 [2024-07-15 10:04:09.470482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.784 [2024-07-15 10:04:09.470496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.784 [2024-07-15 10:04:09.470510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:30:52.784 [2024-07-15 10:04:09.470675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf8210 (9): Bad file descriptor 00:30:52.784 [2024-07-15 10:04:09.471695] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:52.784 [2024-07-15 10:04:09.471720] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:30:52.784 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:52.784 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:52.784 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:52.784 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.784 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:52.784 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:52.784 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:52.784 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.784 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:52.784 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:52.784 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:52.784 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:52.784 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:53.042 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.042 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:53.042 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.042 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:53.042 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:53.042 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:53.042 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.042 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:53.042 10:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:53.976 10:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:53.976 10:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.976 10:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:53.976 10:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.976 10:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:30:53.976 10:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:53.976 10:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:53.976 10:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.976 10:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:53.976 10:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:54.913 [2024-07-15 10:04:11.526054] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:54.913 [2024-07-15 10:04:11.526081] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:54.913 [2024-07-15 10:04:11.526104] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:54.913 [2024-07-15 10:04:11.612412] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:54.913 10:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:54.913 10:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:54.913 10:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.913 10:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:54.913 10:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:54.913 10:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:54.913 10:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:54.913 10:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.171 10:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:55.171 10:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:55.171 [2024-07-15 10:04:11.716614] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:55.171 [2024-07-15 10:04:11.716667] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:55.171 [2024-07-15 10:04:11.716704] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:55.171 [2024-07-15 10:04:11.716728] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:30:55.171 [2024-07-15 10:04:11.716759] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:55.171 [2024-07-15 10:04:11.724022] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c06550 was disconnected and freed. delete nvme_qpair. 
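Condensed, the failover cycle the log has just completed is the following sequence (commands verbatim from the trace; the helpers are sketched earlier): the test fails nvme0 by pulling the target's address and link inside its network namespace, waits for the bdev to vanish, then restores connectivity so discovery re-attaches the same subsystem as nvme1.

    # Inject the fault: remove the target address and down the link
    # inside the target's namespace, cutting off 10.0.0.2:4420 and :8009.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''         # nvme0n1 must disappear within the loss timeout

    # Restore connectivity; rediscovery attaches a fresh controller.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1    # the re-attached namespace surfaces as nvme1n1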
00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2030011 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2030011 ']' 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2030011 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2030011 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2030011' 00:30:56.134 killing process with pid 2030011 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2030011 00:30:56.134 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2030011 00:30:56.394 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:56.394 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:56.394 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:30:56.394 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:56.394 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:30:56.394 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:56.394 10:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:56.394 rmmod nvme_tcp 00:30:56.394 rmmod nvme_fabrics 00:30:56.394 rmmod nvme_keyring 00:30:56.394 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:56.394 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:30:56.394 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
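The rmmod lines above come from nvmftestfini unloading the host-side modules; the entries that follow kill the target app (pid 2029985) and tear down the namespace plumbing. A condensed sketch of that teardown, with the retry loop reconstructed from the set +e / set -e bracketing in the trace (the exact killprocess and _remove_spdk_ns internals are assumptions):

sync
set +e
for i in {1..20}; do    # retry the unload; nvme-tcp can stay busy briefly
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
done
set -e
kill "$nvmfpid" && wait "$nvmfpid" || true    # killprocess; 2029985 in this run
ip netns delete cvl_0_0_ns_spdk 2> /dev/null || true    # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1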
00:30:56.394 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2029985 ']' 00:30:56.394 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2029985 00:30:56.394 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2029985 ']' 00:30:56.394 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2029985 00:30:56.394 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:30:56.394 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:56.394 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2029985 00:30:56.394 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:56.394 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:56.394 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2029985' 00:30:56.394 killing process with pid 2029985 00:30:56.394 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2029985 00:30:56.394 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2029985 00:30:56.661 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:56.662 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:56.662 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:56.662 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:56.662 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:56.662 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.662 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:56.662 10:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.569 10:04:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:58.570 00:30:58.570 real 0m17.597s 00:30:58.570 user 0m25.387s 00:30:58.570 sys 0m3.066s 00:30:58.570 10:04:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:58.570 10:04:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:58.570 ************************************ 00:30:58.570 END TEST nvmf_discovery_remove_ifc 00:30:58.570 ************************************ 00:30:58.828 10:04:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:58.828 10:04:15 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:58.828 10:04:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:58.828 10:04:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:58.828 10:04:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:58.828 ************************************ 00:30:58.828 START TEST nvmf_identify_kernel_target 00:30:58.828 ************************************ 
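The test starting here drives spdk_nvme_identify against a kernel NVMe-oF target rather than an SPDK one. The configure_kernel_target steps traced further down (nvmf/common.sh@658 onward) amount to the configfs sequence sketched here; the attribute file names are the standard kernel nvmet configfs names inferred from the bare echo lines in the trace, while the device path and address are the ones from this run:

modprobe nvmet nvmet-tcp    # the cleanup later does modprobe -r nvmet_tcp nvmet
sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$sub"
mkdir "$sub/namespaces/1"
mkdir /sys/kernel/config/nvmet/ports/1
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"    # surfaces as Model Number below
echo 1 > "$sub/attr_allow_any_host"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
echo tcp > /sys/kernel/config/nvmet/ports/1/addr_trtype
echo 4420 > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam
ln -s "$sub" /sys/kernel/config/nvmet/ports/1/subsystems/

Once the port symlink is in place, the nvme discover call against 10.0.0.1:4420 returns the two discovery records shown below, and both identify passes run against this kernel target.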
00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:58.828 * Looking for test storage... 00:30:58.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:30:58.828 10:04:15 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:30:58.828 10:04:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:01.358 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:01.358 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:01.358 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:01.358 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:01.358 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:01.358 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:01.358 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:01.358 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:01.358 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:01.358 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:01.359 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:01.359 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:01.359 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:01.359 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:01.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:01.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:31:01.359 00:31:01.359 --- 10.0.0.2 ping statistics --- 00:31:01.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.359 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:01.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:01.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:31:01.359 00:31:01.359 --- 10.0.0.1 ping statistics --- 00:31:01.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.359 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:01.359 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:01.360 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:01.360 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:01.360 10:04:17 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:01.360 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:31:01.360 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:01.360 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:01.360 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:01.360 10:04:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:02.295 Waiting for block devices as requested 00:31:02.295 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:02.295 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:02.295 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:02.553 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:02.553 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:02.553 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:02.553 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:02.813 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:02.813 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:02.813 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:02.813 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:03.071 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:03.071 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:03.071 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:03.071 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:03.330 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:03.330 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:03.330 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:03.330 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:03.330 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:03.330 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:03.330 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:03.330 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:03.330 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:03.330 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:03.330 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:03.589 No valid GPT data, bailing 00:31:03.589 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:03.589 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:03.589 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:03.589 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:03.589 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:03.589 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:03.589 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:03.589 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:03.589 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:03.589 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:31:03.589 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:03.589 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:31:03.589 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:03.589 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:31:03.589 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:31:03.589 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:31:03.589 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:03.589 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:03.589 00:31:03.589 Discovery Log Number of Records 2, Generation counter 2 00:31:03.589 =====Discovery Log Entry 0====== 00:31:03.589 trtype: tcp 00:31:03.589 adrfam: ipv4 00:31:03.589 subtype: current discovery subsystem 00:31:03.589 treq: not specified, sq flow control disable supported 00:31:03.589 portid: 1 00:31:03.589 trsvcid: 4420 00:31:03.589 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:03.589 traddr: 10.0.0.1 00:31:03.589 eflags: none 00:31:03.589 sectype: none 00:31:03.589 =====Discovery Log Entry 1====== 00:31:03.589 trtype: tcp 00:31:03.589 adrfam: ipv4 00:31:03.589 subtype: nvme subsystem 00:31:03.589 treq: not specified, sq flow control disable supported 00:31:03.589 portid: 1 00:31:03.589 trsvcid: 4420 00:31:03.589 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:03.589 traddr: 10.0.0.1 00:31:03.589 eflags: none 00:31:03.589 sectype: none 00:31:03.589 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:03.589 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:03.589 EAL: No free 2048 kB hugepages reported on node 1 00:31:03.589 ===================================================== 00:31:03.589 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:03.589 ===================================================== 00:31:03.589 Controller Capabilities/Features 00:31:03.589 ================================ 00:31:03.589 Vendor ID: 0000 00:31:03.589 Subsystem Vendor ID: 0000 00:31:03.589 Serial Number: 57aeb1ecfd16f76fd639 00:31:03.589 Model Number: Linux 00:31:03.589 Firmware Version: 6.7.0-68 00:31:03.589 Recommended Arb Burst: 0 00:31:03.589 IEEE OUI Identifier: 00 00 00 00:31:03.589 Multi-path I/O 00:31:03.589 May have multiple subsystem ports: No 00:31:03.589 May have multiple 
controllers: No 00:31:03.589 Associated with SR-IOV VF: No 00:31:03.589 Max Data Transfer Size: Unlimited 00:31:03.589 Max Number of Namespaces: 0 00:31:03.589 Max Number of I/O Queues: 1024 00:31:03.589 NVMe Specification Version (VS): 1.3 00:31:03.589 NVMe Specification Version (Identify): 1.3 00:31:03.589 Maximum Queue Entries: 1024 00:31:03.589 Contiguous Queues Required: No 00:31:03.589 Arbitration Mechanisms Supported 00:31:03.589 Weighted Round Robin: Not Supported 00:31:03.589 Vendor Specific: Not Supported 00:31:03.589 Reset Timeout: 7500 ms 00:31:03.589 Doorbell Stride: 4 bytes 00:31:03.589 NVM Subsystem Reset: Not Supported 00:31:03.589 Command Sets Supported 00:31:03.589 NVM Command Set: Supported 00:31:03.589 Boot Partition: Not Supported 00:31:03.589 Memory Page Size Minimum: 4096 bytes 00:31:03.589 Memory Page Size Maximum: 4096 bytes 00:31:03.589 Persistent Memory Region: Not Supported 00:31:03.589 Optional Asynchronous Events Supported 00:31:03.589 Namespace Attribute Notices: Not Supported 00:31:03.589 Firmware Activation Notices: Not Supported 00:31:03.589 ANA Change Notices: Not Supported 00:31:03.589 PLE Aggregate Log Change Notices: Not Supported 00:31:03.589 LBA Status Info Alert Notices: Not Supported 00:31:03.589 EGE Aggregate Log Change Notices: Not Supported 00:31:03.589 Normal NVM Subsystem Shutdown event: Not Supported 00:31:03.589 Zone Descriptor Change Notices: Not Supported 00:31:03.589 Discovery Log Change Notices: Supported 00:31:03.589 Controller Attributes 00:31:03.589 128-bit Host Identifier: Not Supported 00:31:03.589 Non-Operational Permissive Mode: Not Supported 00:31:03.589 NVM Sets: Not Supported 00:31:03.589 Read Recovery Levels: Not Supported 00:31:03.589 Endurance Groups: Not Supported 00:31:03.589 Predictable Latency Mode: Not Supported 00:31:03.589 Traffic Based Keep ALive: Not Supported 00:31:03.589 Namespace Granularity: Not Supported 00:31:03.589 SQ Associations: Not Supported 00:31:03.589 UUID List: Not Supported 00:31:03.589 Multi-Domain Subsystem: Not Supported 00:31:03.589 Fixed Capacity Management: Not Supported 00:31:03.589 Variable Capacity Management: Not Supported 00:31:03.589 Delete Endurance Group: Not Supported 00:31:03.589 Delete NVM Set: Not Supported 00:31:03.589 Extended LBA Formats Supported: Not Supported 00:31:03.589 Flexible Data Placement Supported: Not Supported 00:31:03.589 00:31:03.589 Controller Memory Buffer Support 00:31:03.589 ================================ 00:31:03.589 Supported: No 00:31:03.589 00:31:03.589 Persistent Memory Region Support 00:31:03.589 ================================ 00:31:03.589 Supported: No 00:31:03.589 00:31:03.589 Admin Command Set Attributes 00:31:03.589 ============================ 00:31:03.589 Security Send/Receive: Not Supported 00:31:03.589 Format NVM: Not Supported 00:31:03.589 Firmware Activate/Download: Not Supported 00:31:03.589 Namespace Management: Not Supported 00:31:03.589 Device Self-Test: Not Supported 00:31:03.589 Directives: Not Supported 00:31:03.589 NVMe-MI: Not Supported 00:31:03.589 Virtualization Management: Not Supported 00:31:03.589 Doorbell Buffer Config: Not Supported 00:31:03.589 Get LBA Status Capability: Not Supported 00:31:03.589 Command & Feature Lockdown Capability: Not Supported 00:31:03.589 Abort Command Limit: 1 00:31:03.589 Async Event Request Limit: 1 00:31:03.589 Number of Firmware Slots: N/A 00:31:03.589 Firmware Slot 1 Read-Only: N/A 00:31:03.589 Firmware Activation Without Reset: N/A 00:31:03.589 Multiple Update Detection Support: N/A 
00:31:03.589 Firmware Update Granularity: No Information Provided 00:31:03.589 Per-Namespace SMART Log: No 00:31:03.589 Asymmetric Namespace Access Log Page: Not Supported 00:31:03.589 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:03.589 Command Effects Log Page: Not Supported 00:31:03.589 Get Log Page Extended Data: Supported 00:31:03.589 Telemetry Log Pages: Not Supported 00:31:03.589 Persistent Event Log Pages: Not Supported 00:31:03.589 Supported Log Pages Log Page: May Support 00:31:03.589 Commands Supported & Effects Log Page: Not Supported 00:31:03.589 Feature Identifiers & Effects Log Page:May Support 00:31:03.589 NVMe-MI Commands & Effects Log Page: May Support 00:31:03.589 Data Area 4 for Telemetry Log: Not Supported 00:31:03.589 Error Log Page Entries Supported: 1 00:31:03.589 Keep Alive: Not Supported 00:31:03.589 00:31:03.589 NVM Command Set Attributes 00:31:03.589 ========================== 00:31:03.589 Submission Queue Entry Size 00:31:03.589 Max: 1 00:31:03.589 Min: 1 00:31:03.589 Completion Queue Entry Size 00:31:03.589 Max: 1 00:31:03.589 Min: 1 00:31:03.590 Number of Namespaces: 0 00:31:03.590 Compare Command: Not Supported 00:31:03.590 Write Uncorrectable Command: Not Supported 00:31:03.590 Dataset Management Command: Not Supported 00:31:03.590 Write Zeroes Command: Not Supported 00:31:03.590 Set Features Save Field: Not Supported 00:31:03.590 Reservations: Not Supported 00:31:03.590 Timestamp: Not Supported 00:31:03.590 Copy: Not Supported 00:31:03.590 Volatile Write Cache: Not Present 00:31:03.590 Atomic Write Unit (Normal): 1 00:31:03.590 Atomic Write Unit (PFail): 1 00:31:03.590 Atomic Compare & Write Unit: 1 00:31:03.590 Fused Compare & Write: Not Supported 00:31:03.590 Scatter-Gather List 00:31:03.590 SGL Command Set: Supported 00:31:03.590 SGL Keyed: Not Supported 00:31:03.590 SGL Bit Bucket Descriptor: Not Supported 00:31:03.590 SGL Metadata Pointer: Not Supported 00:31:03.590 Oversized SGL: Not Supported 00:31:03.590 SGL Metadata Address: Not Supported 00:31:03.590 SGL Offset: Supported 00:31:03.590 Transport SGL Data Block: Not Supported 00:31:03.590 Replay Protected Memory Block: Not Supported 00:31:03.590 00:31:03.590 Firmware Slot Information 00:31:03.590 ========================= 00:31:03.590 Active slot: 0 00:31:03.590 00:31:03.590 00:31:03.590 Error Log 00:31:03.590 ========= 00:31:03.590 00:31:03.590 Active Namespaces 00:31:03.590 ================= 00:31:03.590 Discovery Log Page 00:31:03.590 ================== 00:31:03.590 Generation Counter: 2 00:31:03.590 Number of Records: 2 00:31:03.590 Record Format: 0 00:31:03.590 00:31:03.590 Discovery Log Entry 0 00:31:03.590 ---------------------- 00:31:03.590 Transport Type: 3 (TCP) 00:31:03.590 Address Family: 1 (IPv4) 00:31:03.590 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:03.590 Entry Flags: 00:31:03.590 Duplicate Returned Information: 0 00:31:03.590 Explicit Persistent Connection Support for Discovery: 0 00:31:03.590 Transport Requirements: 00:31:03.590 Secure Channel: Not Specified 00:31:03.590 Port ID: 1 (0x0001) 00:31:03.590 Controller ID: 65535 (0xffff) 00:31:03.590 Admin Max SQ Size: 32 00:31:03.590 Transport Service Identifier: 4420 00:31:03.590 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:03.590 Transport Address: 10.0.0.1 00:31:03.590 Discovery Log Entry 1 00:31:03.590 ---------------------- 00:31:03.590 Transport Type: 3 (TCP) 00:31:03.590 Address Family: 1 (IPv4) 00:31:03.590 Subsystem Type: 2 (NVM Subsystem) 00:31:03.590 Entry Flags: 
00:31:03.590 Duplicate Returned Information: 0 00:31:03.590 Explicit Persistent Connection Support for Discovery: 0 00:31:03.590 Transport Requirements: 00:31:03.590 Secure Channel: Not Specified 00:31:03.590 Port ID: 1 (0x0001) 00:31:03.590 Controller ID: 65535 (0xffff) 00:31:03.590 Admin Max SQ Size: 32 00:31:03.590 Transport Service Identifier: 4420 00:31:03.590 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:03.590 Transport Address: 10.0.0.1 00:31:03.590 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:03.850 EAL: No free 2048 kB hugepages reported on node 1 00:31:03.850 get_feature(0x01) failed 00:31:03.850 get_feature(0x02) failed 00:31:03.850 get_feature(0x04) failed 00:31:03.850 ===================================================== 00:31:03.850 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:03.850 ===================================================== 00:31:03.850 Controller Capabilities/Features 00:31:03.850 ================================ 00:31:03.850 Vendor ID: 0000 00:31:03.850 Subsystem Vendor ID: 0000 00:31:03.850 Serial Number: 9c0d4b781b36deb2828c 00:31:03.850 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:03.850 Firmware Version: 6.7.0-68 00:31:03.850 Recommended Arb Burst: 6 00:31:03.850 IEEE OUI Identifier: 00 00 00 00:31:03.850 Multi-path I/O 00:31:03.850 May have multiple subsystem ports: Yes 00:31:03.850 May have multiple controllers: Yes 00:31:03.850 Associated with SR-IOV VF: No 00:31:03.850 Max Data Transfer Size: Unlimited 00:31:03.850 Max Number of Namespaces: 1024 00:31:03.850 Max Number of I/O Queues: 128 00:31:03.850 NVMe Specification Version (VS): 1.3 00:31:03.850 NVMe Specification Version (Identify): 1.3 00:31:03.850 Maximum Queue Entries: 1024 00:31:03.850 Contiguous Queues Required: No 00:31:03.850 Arbitration Mechanisms Supported 00:31:03.850 Weighted Round Robin: Not Supported 00:31:03.850 Vendor Specific: Not Supported 00:31:03.850 Reset Timeout: 7500 ms 00:31:03.850 Doorbell Stride: 4 bytes 00:31:03.850 NVM Subsystem Reset: Not Supported 00:31:03.850 Command Sets Supported 00:31:03.850 NVM Command Set: Supported 00:31:03.850 Boot Partition: Not Supported 00:31:03.850 Memory Page Size Minimum: 4096 bytes 00:31:03.850 Memory Page Size Maximum: 4096 bytes 00:31:03.850 Persistent Memory Region: Not Supported 00:31:03.850 Optional Asynchronous Events Supported 00:31:03.850 Namespace Attribute Notices: Supported 00:31:03.850 Firmware Activation Notices: Not Supported 00:31:03.850 ANA Change Notices: Supported 00:31:03.850 PLE Aggregate Log Change Notices: Not Supported 00:31:03.850 LBA Status Info Alert Notices: Not Supported 00:31:03.850 EGE Aggregate Log Change Notices: Not Supported 00:31:03.850 Normal NVM Subsystem Shutdown event: Not Supported 00:31:03.850 Zone Descriptor Change Notices: Not Supported 00:31:03.850 Discovery Log Change Notices: Not Supported 00:31:03.850 Controller Attributes 00:31:03.850 128-bit Host Identifier: Supported 00:31:03.850 Non-Operational Permissive Mode: Not Supported 00:31:03.850 NVM Sets: Not Supported 00:31:03.850 Read Recovery Levels: Not Supported 00:31:03.850 Endurance Groups: Not Supported 00:31:03.850 Predictable Latency Mode: Not Supported 00:31:03.850 Traffic Based Keep ALive: Supported 00:31:03.850 Namespace Granularity: Not Supported 
00:31:03.850 SQ Associations: Not Supported 00:31:03.850 UUID List: Not Supported 00:31:03.850 Multi-Domain Subsystem: Not Supported 00:31:03.850 Fixed Capacity Management: Not Supported 00:31:03.850 Variable Capacity Management: Not Supported 00:31:03.850 Delete Endurance Group: Not Supported 00:31:03.850 Delete NVM Set: Not Supported 00:31:03.850 Extended LBA Formats Supported: Not Supported 00:31:03.850 Flexible Data Placement Supported: Not Supported 00:31:03.850 00:31:03.850 Controller Memory Buffer Support 00:31:03.850 ================================ 00:31:03.850 Supported: No 00:31:03.850 00:31:03.850 Persistent Memory Region Support 00:31:03.850 ================================ 00:31:03.850 Supported: No 00:31:03.850 00:31:03.850 Admin Command Set Attributes 00:31:03.850 ============================ 00:31:03.850 Security Send/Receive: Not Supported 00:31:03.850 Format NVM: Not Supported 00:31:03.850 Firmware Activate/Download: Not Supported 00:31:03.850 Namespace Management: Not Supported 00:31:03.850 Device Self-Test: Not Supported 00:31:03.850 Directives: Not Supported 00:31:03.850 NVMe-MI: Not Supported 00:31:03.850 Virtualization Management: Not Supported 00:31:03.850 Doorbell Buffer Config: Not Supported 00:31:03.850 Get LBA Status Capability: Not Supported 00:31:03.850 Command & Feature Lockdown Capability: Not Supported 00:31:03.850 Abort Command Limit: 4 00:31:03.850 Async Event Request Limit: 4 00:31:03.850 Number of Firmware Slots: N/A 00:31:03.850 Firmware Slot 1 Read-Only: N/A 00:31:03.850 Firmware Activation Without Reset: N/A 00:31:03.850 Multiple Update Detection Support: N/A 00:31:03.850 Firmware Update Granularity: No Information Provided 00:31:03.850 Per-Namespace SMART Log: Yes 00:31:03.850 Asymmetric Namespace Access Log Page: Supported 00:31:03.850 ANA Transition Time : 10 sec 00:31:03.850 00:31:03.850 Asymmetric Namespace Access Capabilities 00:31:03.850 ANA Optimized State : Supported 00:31:03.850 ANA Non-Optimized State : Supported 00:31:03.850 ANA Inaccessible State : Supported 00:31:03.850 ANA Persistent Loss State : Supported 00:31:03.850 ANA Change State : Supported 00:31:03.850 ANAGRPID is not changed : No 00:31:03.850 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:03.850 00:31:03.850 ANA Group Identifier Maximum : 128 00:31:03.850 Number of ANA Group Identifiers : 128 00:31:03.850 Max Number of Allowed Namespaces : 1024 00:31:03.850 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:03.850 Command Effects Log Page: Supported 00:31:03.850 Get Log Page Extended Data: Supported 00:31:03.850 Telemetry Log Pages: Not Supported 00:31:03.850 Persistent Event Log Pages: Not Supported 00:31:03.850 Supported Log Pages Log Page: May Support 00:31:03.850 Commands Supported & Effects Log Page: Not Supported 00:31:03.850 Feature Identifiers & Effects Log Page:May Support 00:31:03.850 NVMe-MI Commands & Effects Log Page: May Support 00:31:03.850 Data Area 4 for Telemetry Log: Not Supported 00:31:03.850 Error Log Page Entries Supported: 128 00:31:03.850 Keep Alive: Supported 00:31:03.850 Keep Alive Granularity: 1000 ms 00:31:03.850 00:31:03.850 NVM Command Set Attributes 00:31:03.850 ========================== 00:31:03.850 Submission Queue Entry Size 00:31:03.850 Max: 64 00:31:03.850 Min: 64 00:31:03.850 Completion Queue Entry Size 00:31:03.850 Max: 16 00:31:03.850 Min: 16 00:31:03.850 Number of Namespaces: 1024 00:31:03.850 Compare Command: Not Supported 00:31:03.850 Write Uncorrectable Command: Not Supported 00:31:03.850 Dataset Management Command: Supported 
00:31:03.850 Write Zeroes Command: Supported 00:31:03.850 Set Features Save Field: Not Supported 00:31:03.850 Reservations: Not Supported 00:31:03.850 Timestamp: Not Supported 00:31:03.850 Copy: Not Supported 00:31:03.850 Volatile Write Cache: Present 00:31:03.850 Atomic Write Unit (Normal): 1 00:31:03.850 Atomic Write Unit (PFail): 1 00:31:03.850 Atomic Compare & Write Unit: 1 00:31:03.850 Fused Compare & Write: Not Supported 00:31:03.850 Scatter-Gather List 00:31:03.850 SGL Command Set: Supported 00:31:03.850 SGL Keyed: Not Supported 00:31:03.850 SGL Bit Bucket Descriptor: Not Supported 00:31:03.850 SGL Metadata Pointer: Not Supported 00:31:03.850 Oversized SGL: Not Supported 00:31:03.850 SGL Metadata Address: Not Supported 00:31:03.850 SGL Offset: Supported 00:31:03.850 Transport SGL Data Block: Not Supported 00:31:03.850 Replay Protected Memory Block: Not Supported 00:31:03.850 00:31:03.850 Firmware Slot Information 00:31:03.850 ========================= 00:31:03.850 Active slot: 0 00:31:03.850 00:31:03.850 Asymmetric Namespace Access 00:31:03.850 =========================== 00:31:03.850 Change Count : 0 00:31:03.850 Number of ANA Group Descriptors : 1 00:31:03.850 ANA Group Descriptor : 0 00:31:03.850 ANA Group ID : 1 00:31:03.850 Number of NSID Values : 1 00:31:03.850 Change Count : 0 00:31:03.850 ANA State : 1 00:31:03.850 Namespace Identifier : 1 00:31:03.850 00:31:03.850 Commands Supported and Effects 00:31:03.850 ============================== 00:31:03.850 Admin Commands 00:31:03.850 -------------- 00:31:03.850 Get Log Page (02h): Supported 00:31:03.850 Identify (06h): Supported 00:31:03.850 Abort (08h): Supported 00:31:03.851 Set Features (09h): Supported 00:31:03.851 Get Features (0Ah): Supported 00:31:03.851 Asynchronous Event Request (0Ch): Supported 00:31:03.851 Keep Alive (18h): Supported 00:31:03.851 I/O Commands 00:31:03.851 ------------ 00:31:03.851 Flush (00h): Supported 00:31:03.851 Write (01h): Supported LBA-Change 00:31:03.851 Read (02h): Supported 00:31:03.851 Write Zeroes (08h): Supported LBA-Change 00:31:03.851 Dataset Management (09h): Supported 00:31:03.851 00:31:03.851 Error Log 00:31:03.851 ========= 00:31:03.851 Entry: 0 00:31:03.851 Error Count: 0x3 00:31:03.851 Submission Queue Id: 0x0 00:31:03.851 Command Id: 0x5 00:31:03.851 Phase Bit: 0 00:31:03.851 Status Code: 0x2 00:31:03.851 Status Code Type: 0x0 00:31:03.851 Do Not Retry: 1 00:31:03.851 Error Location: 0x28 00:31:03.851 LBA: 0x0 00:31:03.851 Namespace: 0x0 00:31:03.851 Vendor Log Page: 0x0 00:31:03.851 ----------- 00:31:03.851 Entry: 1 00:31:03.851 Error Count: 0x2 00:31:03.851 Submission Queue Id: 0x0 00:31:03.851 Command Id: 0x5 00:31:03.851 Phase Bit: 0 00:31:03.851 Status Code: 0x2 00:31:03.851 Status Code Type: 0x0 00:31:03.851 Do Not Retry: 1 00:31:03.851 Error Location: 0x28 00:31:03.851 LBA: 0x0 00:31:03.851 Namespace: 0x0 00:31:03.851 Vendor Log Page: 0x0 00:31:03.851 ----------- 00:31:03.851 Entry: 2 00:31:03.851 Error Count: 0x1 00:31:03.851 Submission Queue Id: 0x0 00:31:03.851 Command Id: 0x4 00:31:03.851 Phase Bit: 0 00:31:03.851 Status Code: 0x2 00:31:03.851 Status Code Type: 0x0 00:31:03.851 Do Not Retry: 1 00:31:03.851 Error Location: 0x28 00:31:03.851 LBA: 0x0 00:31:03.851 Namespace: 0x0 00:31:03.851 Vendor Log Page: 0x0 00:31:03.851 00:31:03.851 Number of Queues 00:31:03.851 ================ 00:31:03.851 Number of I/O Submission Queues: 128 00:31:03.851 Number of I/O Completion Queues: 128 00:31:03.851 00:31:03.851 ZNS Specific Controller Data 00:31:03.851 
============================ 00:31:03.851 Zone Append Size Limit: 0 00:31:03.851 00:31:03.851 00:31:03.851 Active Namespaces 00:31:03.851 ================= 00:31:03.851 get_feature(0x05) failed 00:31:03.851 Namespace ID:1 00:31:03.851 Command Set Identifier: NVM (00h) 00:31:03.851 Deallocate: Supported 00:31:03.851 Deallocated/Unwritten Error: Not Supported 00:31:03.851 Deallocated Read Value: Unknown 00:31:03.851 Deallocate in Write Zeroes: Not Supported 00:31:03.851 Deallocated Guard Field: 0xFFFF 00:31:03.851 Flush: Supported 00:31:03.851 Reservation: Not Supported 00:31:03.851 Namespace Sharing Capabilities: Multiple Controllers 00:31:03.851 Size (in LBAs): 1953525168 (931GiB) 00:31:03.851 Capacity (in LBAs): 1953525168 (931GiB) 00:31:03.851 Utilization (in LBAs): 1953525168 (931GiB) 00:31:03.851 UUID: 942a7ac0-7916-434b-abb3-4147a03b73b2 00:31:03.851 Thin Provisioning: Not Supported 00:31:03.851 Per-NS Atomic Units: Yes 00:31:03.851 Atomic Boundary Size (Normal): 0 00:31:03.851 Atomic Boundary Size (PFail): 0 00:31:03.851 Atomic Boundary Offset: 0 00:31:03.851 NGUID/EUI64 Never Reused: No 00:31:03.851 ANA group ID: 1 00:31:03.851 Namespace Write Protected: No 00:31:03.851 Number of LBA Formats: 1 00:31:03.851 Current LBA Format: LBA Format #00 00:31:03.851 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:03.851 00:31:03.851 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:03.851 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:03.851 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:31:03.851 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:03.851 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:31:03.851 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:03.851 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:03.851 rmmod nvme_tcp 00:31:03.851 rmmod nvme_fabrics 00:31:03.851 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:03.851 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:31:03.851 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:31:03.851 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:03.851 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:03.851 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:03.851 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:03.851 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:03.851 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:03.851 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.851 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:03.851 10:04:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.382 10:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:06.382 
10:04:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:06.382 10:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:06.382 10:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:31:06.382 10:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:06.382 10:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:06.382 10:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:06.382 10:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:06.382 10:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:06.382 10:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:06.382 10:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:06.952 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:06.952 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:06.952 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:06.952 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:06.952 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:06.952 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:06.952 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:07.211 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:07.211 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:07.211 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:07.211 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:07.211 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:07.211 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:07.211 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:07.211 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:07.211 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:08.148 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:31:08.148 00:31:08.148 real 0m9.444s 00:31:08.148 user 0m2.016s 00:31:08.148 sys 0m3.411s 00:31:08.148 10:04:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:08.148 10:04:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:08.148 ************************************ 00:31:08.148 END TEST nvmf_identify_kernel_target 00:31:08.148 ************************************ 00:31:08.148 10:04:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:08.148 10:04:24 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:08.148 10:04:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:08.148 10:04:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:08.148 10:04:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:08.148 ************************************ 00:31:08.148 START TEST nvmf_auth_host 00:31:08.148 ************************************ 00:31:08.148 10:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:08.408 * Looking for test storage... 00:31:08.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:08.408 10:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:08.409 10:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:31:08.409 10:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:08.409 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:08.409 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:08.409 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:08.409 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:08.409 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:08.409 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.409 10:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:08.409 10:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.409 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:08.409 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:08.409 10:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:08.409 10:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.312 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:10.312 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:10.312 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:10.312 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:10.313 
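[annotation] For reference, the clean_kernel_target step traced a short way above dismantles the kernel target in the reverse order of its creation, then unloads the modules once nothing holds them. A sketch using the same testnqn paths (the bare "echo 0" in the trace most plausibly disables the namespace before removal):

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > $subsys/namespaces/1/enable      # take the namespace offline first (assumed target of the echo)
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir $subsys/namespaces/1 /sys/kernel/config/nvmet/ports/1 $subsys
modprobe -r nvmet_tcp nvmet               # only succeeds once no holders remain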
10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:10.313 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:10.313 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:10.313 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:10.313 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:10.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:10.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:31:10.313 00:31:10.313 --- 10.0.0.2 ping statistics --- 00:31:10.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.313 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:10.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:10.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:31:10.313 00:31:10.313 --- 10.0.0.1 ping statistics --- 00:31:10.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.313 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:10.313 10:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:10.313 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:10.313 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:10.313 10:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:10.313 10:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.313 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2037027 00:31:10.313 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:10.313 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2037027 00:31:10.313 10:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2037027 ']' 00:31:10.313 10:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:10.313 10:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:10.313 10:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
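[annotation] The single-host TCP test bed is now in place: one port of the NIC pair lives in a private namespace, each side has an address, port 4420 is opened, and both directions answer pings. The traced nvmf_tcp_init commands condense to:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # one endpoint per namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # root-namespace side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1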
00:31:10.313 10:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:10.313 10:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.572 10:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:10.572 10:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:31:10.572 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:10.572 10:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:10.572 10:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.572 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:10.572 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=07f0939be1038dcfb7fd396c7b638069 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.mfH 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 07f0939be1038dcfb7fd396c7b638069 0 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 07f0939be1038dcfb7fd396c7b638069 0 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=07f0939be1038dcfb7fd396c7b638069 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.mfH 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.mfH 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.mfH 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:10.831 
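[annotation] nvmfappstart, traced just above, launches the SPDK target inside that namespace with DH-HMAC-CHAP debug logging enabled (-L nvme_auth) and waits for its RPC socket; condensed, with the workspace path shortened:

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
waitforlisten "$nvmfpid"    # harness helper: polls /var/tmp/spdk.sock until the app answers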
10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=33f50633bda984b06d126a0400883844500d21d04a7de4fe4b0f9b055d5d139b 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.7N6 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 33f50633bda984b06d126a0400883844500d21d04a7de4fe4b0f9b055d5d139b 3 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 33f50633bda984b06d126a0400883844500d21d04a7de4fe4b0f9b055d5d139b 3 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=33f50633bda984b06d126a0400883844500d21d04a7de4fe4b0f9b055d5d139b 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.7N6 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.7N6 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.7N6 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=663d88bdad41568a8f1d346061027cf72c52353596a32a1b 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.DnO 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 663d88bdad41568a8f1d346061027cf72c52353596a32a1b 0 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 663d88bdad41568a8f1d346061027cf72c52353596a32a1b 0 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=663d88bdad41568a8f1d346061027cf72c52353596a32a1b 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.DnO 00:31:10.831 10:04:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.DnO 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.DnO 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bbe3a7e09c15cad08b7c692dd03b8555d9de80a3544ea931 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Bch 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bbe3a7e09c15cad08b7c692dd03b8555d9de80a3544ea931 2 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bbe3a7e09c15cad08b7c692dd03b8555d9de80a3544ea931 2 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bbe3a7e09c15cad08b7c692dd03b8555d9de80a3544ea931 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Bch 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Bch 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Bch 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=80bb142bd9d89d0d708f5398fd7799f4 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.AFg 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 80bb142bd9d89d0d708f5398fd7799f4 1 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 80bb142bd9d89d0d708f5398fd7799f4 1 
00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=80bb142bd9d89d0d708f5398fd7799f4 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:10.831 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.AFg 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.AFg 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.AFg 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=af95c52de404562bdb74f7dfb7b23f4a 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.cci 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key af95c52de404562bdb74f7dfb7b23f4a 1 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 af95c52de404562bdb74f7dfb7b23f4a 1 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=af95c52de404562bdb74f7dfb7b23f4a 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.cci 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.cci 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.cci 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=6eecab6259d6061466036840c271d15b41b6f388005a1ebb 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.HCd 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6eecab6259d6061466036840c271d15b41b6f388005a1ebb 2 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6eecab6259d6061466036840c271d15b41b6f388005a1ebb 2 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6eecab6259d6061466036840c271d15b41b6f388005a1ebb 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.HCd 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.HCd 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.HCd 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fceeda3f000d2bcec52d378eab92ae25 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.JPL 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fceeda3f000d2bcec52d378eab92ae25 0 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fceeda3f000d2bcec52d378eab92ae25 0 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fceeda3f000d2bcec52d378eab92ae25 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.JPL 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.JPL 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.JPL 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c146f9f4f7170b228675f427a666d1d86e8af7f00c16bd85ff68b53bc98decf6 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lKZ 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c146f9f4f7170b228675f427a666d1d86e8af7f00c16bd85ff68b53bc98decf6 3 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c146f9f4f7170b228675f427a666d1d86e8af7f00c16bd85ff68b53bc98decf6 3 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c146f9f4f7170b228675f427a666d1d86e8af7f00c16bd85ff68b53bc98decf6 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lKZ 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lKZ 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.lKZ 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2037027 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2037027 ']' 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:11.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
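[annotation] At this point five key/ckey pairs exist under /tmp. The trace hides the encoder behind "python -", but the resulting strings match the NVMe DH-HMAC-CHAP secret representation: base64 of the secret bytes followed by a little-endian CRC32, prefixed with DHHC-1:<digest>: (0 = secret used as-is, 1..3 = sha256/384/512). A plausible reconstruction, offered as an assumption rather than the script's verbatim code:

python3 - <<'EOF'
import base64, struct, zlib
key = b"07f0939be1038dcfb7fd396c7b638069"          # the 32 hex chars xxd produced for key0
blob = key + struct.pack("<I", zlib.crc32(key))    # secret bytes || little-endian CRC32
print("DHHC-1:00:" + base64.b64encode(blob).decode() + ":")
EOF
# should reproduce the keys[0] value seen later in the trace if the encoding assumption holds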
00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:11.090 10:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.mfH 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.7N6 ]] 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7N6 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.DnO 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Bch ]] 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bch 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.AFg 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.cci ]] 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cci 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.HCd 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.349 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.JPL ]] 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.JPL 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.lKZ 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
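[annotation] The loop above hands every generated key file to the target's keyring under a stable name, so later RPCs can refer to keys by name rather than by path. With scripts/rpc.py the same registrations look like:

scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.mfH
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7N6
# ... and likewise key1/ckey1 through key4 (key4 has no paired controller key)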
00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:11.608 10:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:12.568 Waiting for block devices as requested 00:31:12.568 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:12.826 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:12.826 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:13.083 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:13.083 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:13.083 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:13.083 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:13.341 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:13.341 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:13.341 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:13.341 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:13.599 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:13.599 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:13.599 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:13.599 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:13.857 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:13.857 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:14.115 No valid GPT data, bailing 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:31:14.115 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:14.373 00:31:14.373 Discovery Log Number of Records 2, Generation counter 2 00:31:14.373 =====Discovery Log Entry 0====== 00:31:14.373 trtype: tcp 00:31:14.373 adrfam: ipv4 00:31:14.373 subtype: current discovery subsystem 00:31:14.373 treq: not specified, sq flow control disable supported 00:31:14.373 portid: 1 00:31:14.373 trsvcid: 4420 00:31:14.373 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:14.373 traddr: 10.0.0.1 00:31:14.373 eflags: none 00:31:14.373 sectype: none 00:31:14.373 =====Discovery Log Entry 1====== 00:31:14.373 trtype: tcp 00:31:14.373 adrfam: ipv4 00:31:14.373 subtype: nvme subsystem 00:31:14.373 treq: not specified, sq flow control disable supported 00:31:14.373 portid: 1 00:31:14.373 trsvcid: 4420 00:31:14.373 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:14.373 traddr: 10.0.0.1 00:31:14.373 eflags: none 00:31:14.373 sectype: none 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 
]] 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.373 10:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.373 nvme0n1 00:31:14.373 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.373 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.373 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.373 
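Up to this point the trace has built the kernel nvmet target through configfs and confirmed it with nvme discover: the discovery log must show exactly two records, the discovery subsystem itself plus nqn.2024-02.io.spdk:cnode0 listening on 10.0.0.1:4420 over TCP, and host0 is then allow-listed against the subsystem. Because xtrace hides redirection targets, the echoes above show only the values being written; the sketch below spells the writes out under the standard Linux nvmet configfs layout (the attribute paths are an assumption, they are not shown verbatim in the log):

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"   # back namespace 1 with the local disk
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"               # TCP listener: 10.0.0.1:4420, IPv4
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"                      # publish the subsystem on port 1
    mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > "$sub/attr_allow_any_host"                   # only allow-listed hosts may connect
    ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 "$sub/allowed_hosts/"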
10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.373 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:14.373 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: ]] 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.632 
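Every digest/dhgroup/keyid combination is judged the same way: after the attach RPC, the controller list is read back, the reported name must be nvme0, and the controller is detached so the next combination starts clean. Pulled out of the trace as a standalone check (rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py; invoking rpc.py directly below is an assumption made for readability):

    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]                                # controller exists => DH-CHAP handshake passed
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0  # tear down before the next combination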
10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.632 nvme0n1 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:14.632 10:04:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: ]] 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.632 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.890 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.890 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:14.890 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:14.890 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:14.890 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.891 nvme0n1 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
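nvmet_auth_set_key provisions the target's half of the DH-HMAC-CHAP handshake for each pass. The four echoed values (key, controller key, 'hmac(shaN)', dhgroup) land in the host's configfs entry; the attribute names below are assumed from the kernel nvmet auth interface, since xtrace truncates the redirections:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "$key"         > "$host/dhchap_key"       # secret the host must prove it holds
    echo "$ckey"        > "$host/dhchap_ctrl_key"  # controller key: enables bidirectional auth
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest under test in this pass
    echo ffdhe2048      > "$host/dhchap_dhgroup"   # DH group under test in this pass

In the DHHC-1:NN:...: strings themselves, the two-digit NN field identifies how the secret was generated (00 for a raw secret, 01/02/03 for a SHA-256/384/512 transformed one), which is why the test's keys deliberately span all four prefixes.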
00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: ]] 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.891 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.149 nvme0n1 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: ]] 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:15.150 10:04:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.150 10:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.408 nvme0n1 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:15.408 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:15.409 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:15.409 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:15.409 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.409 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.409 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:15.409 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:15.409 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:15.409 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:15.409 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:15.409 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:15.409 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.409 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.667 nvme0n1 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: ]] 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:15.667 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:15.668 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:15.668 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.668 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.926 nvme0n1 00:31:15.926 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.926 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:15.926 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:15.926 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.926 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.926 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.926 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.926 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.926 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.926 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.926 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.926 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:15.926 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:15.926 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.926 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:15.926 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:15.926 10:04:32 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:31:15.926 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:15.926 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: ]] 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.927 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.186 nvme0n1 00:31:16.186 
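On the initiator side each pass is two RPC calls, both visible verbatim in the trace: bdev_nvme_set_options restricts which digests and DH groups the host will offer, and bdev_nvme_attach_controller performs the fabric connect that triggers the handshake. key1/ckey1 are key names presumably registered with SPDK earlier in the test (the registration step lies outside this excerpt):

    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1   # supplying ckey1 makes the auth bidirectional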
10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: ]] 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.186 10:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.445 nvme0n1 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
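The repetition in this log is the product of three nested loops in host/auth.sh; the @100/@101/@102 markers are the loop headers and @103/@104 the body. A reconstruction of the skeleton from those markers (array contents taken from the printf lines earlier in the run):

    for digest in "${digests[@]}"; do           # sha256 sha384 sha512
        for dhgroup in "${dhgroups[@]}"; do     # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
            for keyid in "${!keys[@]}"; do      # 0 1 2 3 4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (configfs)
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side: attach, verify, detach
            done
        done
    done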
00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: ]] 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.446 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.703 nvme0n1 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.703 
10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:16.703 10:04:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.703 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.960 nvme0n1 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: ]] 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:16.960 10:04:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.960 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.216 nvme0n1 00:31:17.216 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.216 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.216 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.216 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.216 10:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.216 10:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: ]] 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:17.473 10:04:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.473 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.730 nvme0n1 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: ]] 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.730 10:04:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.730 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.988 nvme0n1 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: ]] 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.988 10:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.246 nvme0n1 00:31:18.246 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.246 10:04:35 
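Every iteration traced above follows the same four-RPC shape; the round just completed (sha256 / ffdhe4096 / keyid=3) condenses to the sequence below. All RPC names and flags are taken verbatim from the trace; only the rpc.py path is illustrative:

  # One connect_authenticate round, written out by hand from the trace.
  rpc=scripts/rpc.py   # illustrative path, not shown in this log
  $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  # the controller only shows up if DH-HMAC-CHAP succeeded (auth.sh@64)
  [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  $rpc bdev_nvme_detach_controller nvme0   # auth.sh@65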
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.246 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:18.246 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.246 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.503 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.761 nvme0n1 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:18.761 10:04:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: ]] 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.761 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.324 nvme0n1 00:31:19.324 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.324 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.324 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.324 10:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.324 10:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:19.324 
10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: ]] 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.324 10:04:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.324 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.889 nvme0n1 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: ]] 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- 
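The nvmf/common.sh@741-755 run that repeats before every attach is get_main_ns_ip, which resolves the address to dial from the transport in use. A reconstruction from the traced lines; the early-return error handling is an assumption, since only the success path appears in this log:

  # Sketch of get_main_ns_ip (nvmf/common.sh@741-755), success path as traced.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP   # common.sh@744
          [tcp]=NVMF_INITIATOR_IP       # common.sh@745
      )
      [[ -z $TEST_TRANSPORT ]] && return 1                   # common.sh@747
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # common.sh@748: ip=NVMF_INITIATOR_IP
      ip=${!ip}                              # indirect expansion -> 10.0.0.1
      [[ -z $ip ]] && return 1               # common.sh@750
      echo "$ip"                             # common.sh@755
  }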
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.889 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.147 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:20.147 10:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:20.147 10:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:20.147 10:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:20.147 10:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.147 10:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.147 10:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:20.147 10:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:20.147 10:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:20.147 10:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:20.147 10:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:20.147 10:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:20.147 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.147 10:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.713 nvme0n1 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:20.713 
10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: ]] 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.713 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.278 nvme0n1 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.278 10:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.842 nvme0n1 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: ]] 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:21.842 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:21.843 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:21.843 10:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.843 10:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.843 10:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.843 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:21.843 10:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:21.843 10:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:21.843 10:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:21.843 10:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.843 10:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.843 10:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:21.843 10:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.843 10:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:21.843 10:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:21.843 10:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:21.843 10:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:21.843 10:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.843 10:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.776 nvme0n1 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:22.776 10:04:39 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: ]] 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.776 10:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.150 nvme0n1 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: ]] 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:24.150 10:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.151 10:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.716 nvme0n1 00:31:24.716 10:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.716 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:24.716 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:24.716 10:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.716 10:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.716 10:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.716 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:24.716 
10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:24.716 10:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.716 10:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: ]] 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
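The attach/verify/detach cycle that recurs through this stretch of the trace (the host/auth.sh@64-65 tags above) is the actual pass/fail check: the controller only shows up in bdev_nvme_get_controllers if the DH-HCHAP handshake succeeded. A minimal sketch of that step, using only the RPCs visible in the trace and assuming rpc_cmd wraps SPDK's scripts/rpc.py as it does elsewhere in autotest:

    # List bdev NVMe controllers; .name is the -b argument given to
    # bdev_nvme_attach_controller, so "nvme0" appearing means the
    # authenticated connect at auth.sh@61 actually completed.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    # Tear the controller down so the next digest/dhgroup/keyid
    # combination starts from a clean slate.
    rpc_cmd bdev_nvme_detach_controller nvme0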
00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:24.973 10:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:24.974 10:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:24.974 10:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.974 10:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.933 nvme0n1 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:25.933 
10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.933 10:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.868 nvme0n1 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: ]] 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.868 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:26.869 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:26.869 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:26.869 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:26.869 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.869 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.869 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:26.869 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:26.869 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:26.869 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:26.869 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:26.869 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:26.869 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.869 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.128 nvme0n1 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: ]] 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
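Each key and ckey echoed above is an NVMe-oF DH-HCHAP secret in the DHHC-1 representation. If the usual nvme-cli/SPDK encoding applies here, the middle field names the hash used to transform the secret (00 = untransformed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the secret followed by a 4-byte CRC-32. A quick, purely illustrative check of that shape against one of the trace's own secrets:

    # 48 base64 chars -> 36 bytes -> a 32-byte secret plus 4 CRC bytes.
    key='DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73:'
    payload=${key#DHHC-1:*:}    # strip the "DHHC-1:<hash>:" prefix
    payload=${payload%:}        # strip the trailing colon
    echo -n "$payload" | base64 -d | wc -c    # prints 36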
00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.128 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.386 nvme0n1 00:31:27.386 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.386 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.386 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.387 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.387 10:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.387 10:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: ]] 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.387 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.646 nvme0n1 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: ]] 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.646 nvme0n1 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.646 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.904 nvme0n1 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.904 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.905 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.905 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.905 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: ]] 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
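The nvmet_auth_set_key body traced at host/auth.sh@48-51 just above only shows the echo side; the redirection targets are cut out of the xtrace. On the target side they presumably land in the kernel nvmet configfs entry for the host NQN, whose standard attributes take exactly these values (a kernel-crypto hash name, a DH group name, and DHHC-1 secrets). Treat the path and attribute names below as assumptions reconstructed from that layout, not as lines read from auth.sh:

    # Hedged reconstruction of auth.sh@48-51 for sha384/ffdhe3072/keyid 0.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"     # auth.sh@48
    echo ffdhe3072      > "$host/dhchap_dhgroup"  # auth.sh@49
    echo "$key"         > "$host/dhchap_key"      # auth.sh@50
    # auth.sh@51 guards the controller key: it is only written when a
    # ckey exists for this keyid (keyid 4 has none in this trace).
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"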
00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.163 nvme0n1 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.163 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: ]] 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
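The nvmf/common.sh@741-755 sequence that completes at the start of this stretch is get_main_ns_ip resolving which address to dial: the associative array maps a transport to the *name* of an environment variable, and the final echo of 10.0.0.1 is plausibly an indirect expansion of that name. A standalone sketch of the resolution (TEST_TRANSPORT as the selector variable is an assumption; the trace only shows its expanded value, tcp):

    # Map transport -> name of the env var holding the address.
    declare -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    TEST_TRANSPORT=tcp
    NVMF_INITIATOR_IP=10.0.0.1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP (common.sh@748)
    echo "${!ip}"                          # indirect expansion -> 10.0.0.1 (@755)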
00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.421 10:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.421 nvme0n1 00:31:28.421 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.421 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.422 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.422 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.422 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.422 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: ]] 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.680 nvme0n1 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.680 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: ]] 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.939 nvme0n1 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.939 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.197 nvme0n1 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.197 10:04:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.197 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.457 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.457 10:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.457 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.457 10:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: ]] 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.457 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.717 nvme0n1 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: ]] 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.717 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.718 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.718 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.718 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.718 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.718 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.718 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:29.718 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.718 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.977 nvme0n1 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.977 10:04:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: ]] 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.977 10:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.542 nvme0n1 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: ]] 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:30.542 10:04:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:30.542 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:30.543 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.543 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:30.543 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.543 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.543 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.543 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.543 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:30.543 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:30.543 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:30.543 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.543 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.543 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:30.543 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:30.543 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:30.543 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:30.543 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:30.543 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:30.543 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.543 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.801 nvme0n1 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:31:30.801 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.059 nvme0n1 00:31:31.059 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.059 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.059 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.059 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.059 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.059 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.059 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: ]] 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.060 10:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.626 nvme0n1 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: ]] 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.626 10:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.193 nvme0n1 00:31:32.193 10:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.193 10:04:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.193 10:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.193 10:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.194 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.194 10:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: ]] 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.451 10:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:32.451 10:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.451 10:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.451 10:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.451 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.451 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:31:32.451 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:32.451 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:32.451 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.451 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.451 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:32.451 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.451 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:32.451 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:32.451 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:32.451 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:32.451 10:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.451 10:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.016 nvme0n1 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: ]] 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:33.016 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:33.017 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:33.017 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.017 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.017 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:33.017 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.017 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:33.017 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:33.017 10:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:33.017 10:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:33.017 10:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.017 10:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.585 nvme0n1 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
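(Aside for readers following the trace: the get_main_ns_ip helper expanded repeatedly in the nvmf/common.sh lines above reduces to a small transport-to-variable lookup. A minimal sketch of that logic, reconstructed from the xtrace alone — TEST_TRANSPORT is an assumed name for the variable whose value the trace shows as "tcp", so treat this as an illustration of the traced behavior, not the verbatim nvmf/common.sh source:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs resolve the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs, as in this log, use the initiator IP

    # Bail out if the transport or its candidate variable name is empty
    # (the traced checks [[ -z tcp ]] and [[ -z NVMF_INITIATOR_IP ]]).
    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    # Indirect expansion; in this run NVMF_INITIATOR_IP holds 10.0.0.1.
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}

End of aside; the trace continues below.)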
00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.585 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.161 nvme0n1 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: ]] 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
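(Aside: each pass of the loop traced above pairs nvmet_auth_set_key, which programs the DHHC-1 secret into the kernel nvmet target, with connect_authenticate, which authenticates an SPDK host connection against it. Condensing the host side of the trace into one readable function — a sketch of what the xtrace shows, where rpc_cmd stands for SPDK's scripts/rpc.py wrapper and ckeys[] is the array of controller secrets seen in the log:

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # The controller (bidirectional) key is optional: expand to nothing
    # when no ckey<N> was generated for this keyid, exactly as the traced
    # ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion does.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # Restrict the host to the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach over TCP; the attach only succeeds if DH-HMAC-CHAP completes.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # Pass criterion: the authenticated controller shows up by name,
    # then is detached so the next digest/dhgroup/keyid can run.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

End of aside; the sha384/ffdhe8192 iterations continue below.)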
00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.162 10:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.094 nvme0n1 00:31:35.094 10:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.094 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.094 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.094 10:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.094 10:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.094 10:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: ]] 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.353 10:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.290 nvme0n1 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: ]] 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:36.290 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.291 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:31:36.291 10:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.291 10:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.291 10:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.291 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.291 10:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.291 10:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.291 10:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:36.291 10:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.291 10:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.291 10:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:36.291 10:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.291 10:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:36.291 10:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:36.291 10:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:36.291 10:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:36.291 10:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.291 10:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.222 nvme0n1 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: ]] 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:37.222 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.223 10:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.599 nvme0n1 00:31:38.599 10:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.599 10:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:31:38.599 10:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.600 10:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.600 10:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.600 10:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.600 10:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.600 10:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.600 10:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.600 10:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:38.600 10:04:55 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.600 10:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.212 nvme0n1 00:31:39.212 10:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.212 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.212 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.212 10:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.212 10:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.212 10:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: ]] 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.472 10:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.472 nvme0n1 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.472 10:04:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: ]] 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.472 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.732 nvme0n1 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: ]] 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.732 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.992 nvme0n1 00:31:39.992 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.992 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.992 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.992 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.992 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.992 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.992 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.992 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.992 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.992 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.992 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.992 10:04:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: ]] 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.993 10:04:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.993 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.253 nvme0n1 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.253 10:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.514 nvme0n1 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: ]] 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.514 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.775 nvme0n1 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.775 
10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: ]] 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.775 10:04:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.775 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.035 nvme0n1 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
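The repeated host/auth.sh@48-@51 echo records throughout this trace are the target-side provisioning step: before each connection attempt, nvmet_auth_set_key pushes the digest, DH group, and DHHC-1 secrets for the current key slot at the kernel nvmet target. A minimal sketch of that helper as reconstructed from the echoed values; the xtrace output does not show where the echoes are redirected, so the configfs paths below are an assumption modeled on the Linux kernel nvmet per-host attributes:

    # Hedged reconstruction of nvmet_auth_set_key from the @48-@51 records.
    # Only the echoed values appear in the trace; the configfs destinations
    # are assumed.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. 'hmac(sha512)'
        echo "$dhgroup"      > "$host/dhchap_dhgroup"  # e.g. ffdhe3072
        echo "$key"          > "$host/dhchap_key"      # host secret, DHHC-1 format
        # Key slot 4 has an empty ckey (see '[[ -z '' ]]' at @51), so
        # bidirectional authentication is skipped for that slot:
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }
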
00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: ]] 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.035 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.293 nvme0n1 00:31:41.293 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.293 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.293 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.293 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.293 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.293 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.294 10:04:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: ]] 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
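Each provisioning step is paired with the host-side half traced at host/auth.sh@55-@65: the nvmf/common.sh@741-@755 records are get_main_ns_ip resolving the target address per transport from the ip_candidates table (10.0.0.1 for TCP), after which connect_authenticate restricts the SPDK host to the single digest/DH-group combination under test and attaches with the matching key slot. A condensed sketch of that flow as it appears in the surrounding records; rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py:

    # Host-side flow repeated for every (digest, dhgroup, keyid) tuple.
    # Attach only succeeds if DH-HMAC-CHAP completes, so finding nvme0 in
    # the controller list is the pass condition checked at host/auth.sh@64.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
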
00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.294 10:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.552 nvme0n1 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:41.552 
10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.552 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.811 nvme0n1 00:31:41.811 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.811 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: ]] 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.812 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.073 nvme0n1 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: ]] 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.073 10:04:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.073 10:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.330 nvme0n1 00:31:42.330 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.330 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.330 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.330 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.330 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.330 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
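On the target side the trace shows only bare echo commands ('hmac(sha512)', the DH group name, the DHHC-1 blobs) because bash xtrace never prints redirection targets. The echoed values match the Linux kernel nvmet per-host configfs attributes, so a plausible reconstruction of nvmet_auth_set_key (@42-@51) is sketched below; the configfs paths are an assumption, not something visible in this log.

    nvmet_auth_set_key() { # sketch; redirection targets are assumed, xtrace hides them
        local digest dhgroup keyid key ckey
        digest="$1" dhgroup="$2" keyid="$3"
        key="${keys[keyid]}" ckey="${ckeys[keyid]}"

        # Assumed kernel nvmet configfs host entry for the initiator's NQN
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})" > "$host/dhchap_hash"
        echo "$dhgroup" > "$host/dhchap_dhgroup"
        echo "$key" > "$host/dhchap_key"
        # A controller (bidirectional) key is optional; key index 4 has none
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }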
00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: ]] 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.637 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.896 nvme0n1 00:31:42.896 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.896 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:31:42.896 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: ]] 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.897 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.154 nvme0n1 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.154 10:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.411 nvme0n1 00:31:43.411 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.411 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.411 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.411 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.411 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.667 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.667 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.667 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.667 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:31:43.667 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.667 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.667 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:43.667 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.667 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:43.667 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.667 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: ]] 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
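The secrets themselves use the standard DHHC-1 representation from the NVMe in-band authentication spec (TP 8006): DHHC-1:<id>:<base64>:, where the two-digit id records how the secret was transformed (00 = cleartext, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the secret plus a CRC-32 check. That is why the five key indices exercised in this run step through ids 00, 00, 01, 02 and 03. Keys of this shape can be produced with nvme-cli, assuming a build recent enough to ship the gen-dhchap-key subcommand; the invocation below is illustrative and flag spellings can vary between versions.

    # hypothetical example: derive a SHA-256-transformed DH-HMAC-CHAP key for a host NQN
    nvme gen-dhchap-key --hmac=1 --nqn nqn.2024-02.io.spdk:host0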
00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.668 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.233 nvme0n1 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: ]] 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
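get_main_ns_ip (nvmf/common.sh@741-755) is the small helper whose trace keeps reappearing before every attach: it maps the transport under test to the name of the environment variable holding the address, then dereferences that name. Reconstructed from the trace, it looks roughly like the sketch below (a sketch, not verbatim source).

    get_main_ns_ip() { # sketch reconstructed from nvmf/common.sh@741-755 in the trace
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                    # traced as: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # traced as: [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1                             # indirect expansion yields 10.0.0.1
        echo "${!ip}"
    }

With tcp the initiator address 10.0.0.1 comes from NVMF_INITIATOR_IP; on an rdma run the first target IP would be used instead.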
00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.233 10:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.802 nvme0n1 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: ]] 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.802 10:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.370 nvme0n1 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: ]] 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:45.370 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:45.371 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.371 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:45.371 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.371 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.371 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.371 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.371 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.371 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.371 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:45.371 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.371 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.371 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:45.371 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.371 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:45.371 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:45.371 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:45.371 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:45.371 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.371 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.939 nvme0n1 00:31:45.939 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.940 10:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.508 nvme0n1 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.508 10:05:03 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdmMDkzOWJlMTAzOGRjZmI3ZmQzOTZjN2I2MzgwNjmygBOy: 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: ]] 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzNmNTA2MzNiZGE5ODRiMDZkMTI2YTA0MDA4ODM4NDQ1MDBkMjFkMDRhN2RlNGZlNGIwZjliMDU1ZDVkMTM5YihZVnY=: 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.508 10:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.881 nvme0n1 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
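The nvmet_auth_set_key calls in this loop (host/auth.sh@42-@51) stage each DHHC-1 secret on the kernel target before the host attempts to connect with it. The echoes at @48-@51 write the digest, DH group, host key, and (when present) controller key; a plausible reconstruction of where they land, assuming the standard nvmet configfs host attributes (the exact sysfs paths are not visible in this trace, and the key values below are placeholders):

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)'  > "$host/dhchap_hash"       # digest, @48
    echo ffdhe8192       > "$host/dhchap_dhgroup"    # DH group, @49
    echo 'DHHC-1:00:...' > "$host/dhchap_key"        # host key, @50
    echo 'DHHC-1:03:...' > "$host/dhchap_ctrl_key"   # controller key, @51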
DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: ]] 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.881 10:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.882 10:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.882 10:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.882 10:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:47.882 10:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.882 10:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.819 nvme0n1 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.819 10:05:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBiYjE0MmJkOWQ4OWQwZDcwOGY1Mzk4ZmQ3Nzk5ZjSwRm73: 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: ]] 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY5NWM1MmRlNDA0NTYyYmRiNzRmN2RmYjdiMjNmNGHCFFQJ: 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:48.819 10:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.820 10:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.820 10:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.820 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.820 10:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.820 10:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.820 10:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.820 10:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.820 10:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.820 10:05:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.820 10:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.820 10:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.820 10:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.820 10:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.820 10:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:48.820 10:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.820 10:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.759 nvme0n1 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVlY2FiNjI1OWQ2MDYxNDY2MDM2ODQwYzI3MWQxNWI0MWI2ZjM4ODAwNWExZWJi0ZW9WA==: 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: ]] 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmNlZWRhM2YwMDBkMmJjZWM1MmQzNzhlYWI5MmFlMjWb67mj: 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:31:49.759 10:05:06 
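On the host side, each connect_authenticate pass reduces to four RPCs; the sequence below is lifted verbatim from the trace (rpc_cmd is the suite's wrapper around SPDK's rpc.py) and shown in one place for readability:

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0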
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.759 10:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.698 nvme0n1 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NmY5ZjRmNzE3MGIyMjg2NzVmNDI3YTY2NmQxZDg2ZThhZjdmMDBjMTZiZDg1ZmY2OGI1M2JjOThkZWNmNjJHJZ0=: 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:31:50.698 10:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.078 nvme0n1 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjYzZDg4YmRhZDQxNTY4YThmMWQzNDYwNjEwMjdjZjcyYzUyMzUzNTk2YTMyYTFiDrjTWQ==: 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlM2E3ZTA5YzE1Y2FkMDhiN2M2OTJkZDAzYjg1NTVkOWRlODBhMzU0NGVhOTMxVRNDpg==: 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.078 
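Keyid 4 above carries no controller key (ckey is empty), so that pass exercises unidirectional authentication. The mechanism is the :+ expansion at host/auth.sh@58: when ckeys[keyid] is empty, the whole option pair vanishes from the ckey array and the attach at @61 runs without --dhchap-ctrlr-key. A minimal illustration (placeholder key value):

    ckeys[1]='DHHC-1:02:...'; keyid=1
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"    # --dhchap-ctrlr-key ckey1
    ckeys[4]=; keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # 0 -> flag omitted, host-to-target auth only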
10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.078 request: 00:31:52.078 { 00:31:52.078 "name": "nvme0", 00:31:52.078 "trtype": "tcp", 00:31:52.078 "traddr": "10.0.0.1", 00:31:52.078 "adrfam": "ipv4", 00:31:52.078 "trsvcid": "4420", 00:31:52.078 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:52.078 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:52.078 "prchk_reftag": false, 00:31:52.078 "prchk_guard": false, 00:31:52.078 "hdgst": false, 00:31:52.078 "ddgst": false, 00:31:52.078 "method": "bdev_nvme_attach_controller", 00:31:52.078 "req_id": 1 00:31:52.078 } 00:31:52.078 Got JSON-RPC error response 00:31:52.078 response: 00:31:52.078 { 00:31:52.078 "code": -5, 00:31:52.078 "message": "Input/output error" 00:31:52.078 } 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.078 request: 00:31:52.078 { 00:31:52.078 "name": "nvme0", 00:31:52.078 "trtype": "tcp", 00:31:52.078 "traddr": "10.0.0.1", 00:31:52.078 "adrfam": "ipv4", 00:31:52.078 "trsvcid": "4420", 00:31:52.078 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:52.078 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:52.078 "prchk_reftag": false, 00:31:52.078 "prchk_guard": false, 00:31:52.078 "hdgst": false, 00:31:52.078 "ddgst": false, 00:31:52.078 "dhchap_key": "key2", 00:31:52.078 "method": "bdev_nvme_attach_controller", 00:31:52.078 "req_id": 1 00:31:52.078 } 00:31:52.078 Got JSON-RPC error response 00:31:52.078 response: 00:31:52.078 { 00:31:52.078 "code": -5, 00:31:52.078 "message": "Input/output error" 00:31:52.078 } 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:31:52.078 10:05:08 
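The failed attaches here are the point of the test: host/auth.sh@112 and @117 wrap the RPC in NOT, which inverts the exit status, so the run only passes when the target rejects the connection (first with no key, then with a deliberately wrong key2). A stripped-down sketch of the idea; the real helper in common/autotest_common.sh also classifies exit codes, as the es handling in the trace shows:

    NOT() {   # succeed only if the wrapped command fails
        if "$@"; then return 1; else return 0; fi
    }
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
    # the JSON-RPC reply ("code": -5, "Input/output error") is that expected
    # rejection surfacing through bdev_nvme_attach_controller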
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.078 request: 00:31:52.078 { 00:31:52.078 "name": "nvme0", 00:31:52.078 "trtype": "tcp", 00:31:52.078 "traddr": "10.0.0.1", 00:31:52.078 "adrfam": "ipv4", 
00:31:52.078 "trsvcid": "4420", 00:31:52.078 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:52.078 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:52.078 "prchk_reftag": false, 00:31:52.078 "prchk_guard": false, 00:31:52.078 "hdgst": false, 00:31:52.078 "ddgst": false, 00:31:52.078 "dhchap_key": "key1", 00:31:52.078 "dhchap_ctrlr_key": "ckey2", 00:31:52.078 "method": "bdev_nvme_attach_controller", 00:31:52.078 "req_id": 1 00:31:52.078 } 00:31:52.078 Got JSON-RPC error response 00:31:52.078 response: 00:31:52.078 { 00:31:52.078 "code": -5, 00:31:52.078 "message": "Input/output error" 00:31:52.078 } 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:52.078 rmmod nvme_tcp 00:31:52.078 rmmod nvme_fabrics 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2037027 ']' 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2037027 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2037027 ']' 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2037027 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:52.078 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2037027 00:31:52.337 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:52.337 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:52.337 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2037027' 00:31:52.337 killing process with pid 2037027 00:31:52.337 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2037027 00:31:52.337 10:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2037027 00:31:52.337 10:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:31:52.337 10:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:52.337 10:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:52.337 10:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:52.337 10:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:52.337 10:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.337 10:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:52.337 10:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.925 10:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:54.925 10:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:54.925 10:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:54.925 10:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:31:54.925 10:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:31:54.925 10:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:31:54.925 10:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:54.925 10:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:54.925 10:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:54.925 10:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:54.925 10:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:54.925 10:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:54.925 10:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:55.861 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:55.861 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:55.861 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:55.861 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:55.861 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:55.861 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:55.861 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:55.861 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:55.861 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:55.861 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:55.861 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:55.861 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:55.861 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:55.861 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:55.861 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:55.861 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:56.796 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:31:57.053 10:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.mfH /tmp/spdk.key-null.DnO /tmp/spdk.key-sha256.AFg /tmp/spdk.key-sha384.HCd /tmp/spdk.key-sha512.lKZ 
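The teardown unwinds the kernel target in strict child-before-parent order, since configfs directories refuse removal while links or children remain. Consolidated from the cleanup and clean_kernel_target calls in the trace (the bare `echo 0` presumably disables the namespace first; its target path is not shown in the log):

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    rm    "$sub/allowed_hosts/nqn.2024-02.io.spdk:host0"
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir "$sub/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$sub"
    modprobe -r nvmet_tcp nvmet   # only succeeds once no module holders remain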
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:31:57.053 10:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:57.993 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:57.993 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:57.993 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:57.993 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:57.993 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:57.993 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:57.993 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:57.993 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:57.993 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:57.993 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:57.993 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:57.993 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:57.993 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:57.993 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:57.993 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:57.993 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:57.993 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:58.252 00:31:58.252 real 0m50.004s 00:31:58.252 user 0m47.716s 00:31:58.252 sys 0m5.791s 00:31:58.252 10:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:58.252 10:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.252 ************************************ 00:31:58.252 END TEST nvmf_auth_host 00:31:58.252 ************************************ 00:31:58.252 10:05:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:58.252 10:05:14 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:31:58.252 10:05:14 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:58.252 10:05:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:58.252 10:05:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:58.252 10:05:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:58.252 ************************************ 00:31:58.252 START TEST nvmf_digest 00:31:58.252 ************************************ 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:58.252 * Looking for test storage... 
00:31:58.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.252 10:05:14 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.252 10:05:15 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:58.253 10:05:15 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:31:58.253 10:05:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:00.160 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:00.160 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:00.160 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
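Discovery lines like "Found net devices under 0000:0a:00.0: cvl_0_0" above come from a simple sysfs walk (nvmf/common.sh@383-@400): each candidate PCI function is mapped to its netdev names by globbing its net/ directory. In essence:

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            [[ -e $path ]] && echo "Found net devices under $pci: ${path##*/}"
        done
    done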
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:00.160 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:32:00.160 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:00.421 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:00.421 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:00.421 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:00.421 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.421 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:00.421 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:00.421 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:00.421 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:00.421 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:00.421 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:00.421 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.421 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:00.421 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:00.421 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:00.421 10:05:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:00.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:32:00.421 00:32:00.421 --- 10.0.0.2 ping statistics --- 00:32:00.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.421 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:00.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:00.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:32:00.421 00:32:00.421 --- 10.0.0.1 ping statistics --- 00:32:00.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.421 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:00.421 ************************************ 00:32:00.421 START TEST nvmf_digest_clean 00:32:00.421 ************************************ 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2046538 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2046538 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2046538 ']' 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.421 
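[editorial aside] The nvmf_tcp_init phase traced above (nvmf/common.sh@229-268) reduces to a short iproute2 sequence. A condensed replay of the commands this run executed — the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this host, and the address-flush steps are omitted:

# Move the first port into a private namespace so target and initiator
# traverse a real TCP path between the two physical ports.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator keeps 10.0.0.1 in the root namespace; the target gets
# 10.0.0.2 inside the namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP listening port and verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application itself is then launched under 'ip netns exec cvl_0_0_ns_spdk' — that is the NVMF_TARGET_NS_CMD prefix folded into NVMF_APP in the trace above.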
10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:00.421 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:00.421 [2024-07-15 10:05:17.177734] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:32:00.421 [2024-07-15 10:05:17.177815] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.680 EAL: No free 2048 kB hugepages reported on node 1 00:32:00.680 [2024-07-15 10:05:17.217960] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:00.680 [2024-07-15 10:05:17.244460] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.680 [2024-07-15 10:05:17.333042] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.680 [2024-07-15 10:05:17.333105] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.680 [2024-07-15 10:05:17.333118] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.680 [2024-07-15 10:05:17.333130] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.680 [2024-07-15 10:05:17.333140] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
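[editorial aside] The waitforlisten helper being traced here blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock. A minimal sketch of the idea — the rpc_get_methods probe with a one-second timeout is an assumption about the helper's internals, not a quote from autotest_common.sh:

# Hypothetical readiness loop: poll the UNIX-domain RPC socket until
# the application responds or the retry budget is exhausted.
rpc_addr=/var/tmp/spdk.sock
max_retries=100
for ((i = 0; i < max_retries; i++)); do
    if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
        break                      # target is up and serving RPCs
    fi
    sleep 0.5                      # interval is illustrative
done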
00:32:00.680 [2024-07-15 10:05:17.333197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.680 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:00.680 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:00.680 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:00.680 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:00.680 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:00.680 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.680 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:00.680 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:00.680 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:00.680 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.680 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:00.939 null0 00:32:00.939 [2024-07-15 10:05:17.516887] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.939 [2024-07-15 10:05:17.541112] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:00.939 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.939 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:00.939 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:00.939 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:00.939 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:00.939 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:00.939 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:00.939 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:00.939 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2046562 00:32:00.939 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:00.939 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2046562 /var/tmp/bperf.sock 00:32:00.939 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2046562 ']' 00:32:00.939 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:00.939 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:00.939 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:32:00.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:00.939 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:00.939 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:00.939 [2024-07-15 10:05:17.589272] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:32:00.939 [2024-07-15 10:05:17.589348] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046562 ] 00:32:00.939 EAL: No free 2048 kB hugepages reported on node 1 00:32:00.940 [2024-07-15 10:05:17.622033] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:00.940 [2024-07-15 10:05:17.652058] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.198 [2024-07-15 10:05:17.742813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.198 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:01.198 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:01.198 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:01.198 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:01.198 10:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:01.456 10:05:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:01.456 10:05:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:02.027 nvme0n1 00:32:02.027 10:05:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:02.027 10:05:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:02.027 Running I/O for 2 seconds... 
00:32:03.930 00:32:03.930 Latency(us) 00:32:03.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:03.930 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:03.930 nvme0n1 : 2.00 18573.86 72.55 0.00 0.00 6882.87 3616.62 15728.64 00:32:03.930 =================================================================================================================== 00:32:03.930 Total : 18573.86 72.55 0.00 0.00 6882.87 3616.62 15728.64 00:32:03.930 0 00:32:03.930 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:03.930 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:03.930 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:03.930 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:03.930 | select(.opcode=="crc32c") 00:32:03.930 | "\(.module_name) \(.executed)"' 00:32:03.930 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:04.189 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:04.189 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:04.189 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:04.189 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:04.189 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2046562 00:32:04.189 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2046562 ']' 00:32:04.189 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2046562 00:32:04.189 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:04.189 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:04.189 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2046562 00:32:04.189 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:04.189 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:04.189 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2046562' 00:32:04.189 killing process with pid 2046562 00:32:04.189 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2046562 00:32:04.189 Received shutdown signal, test time was about 2.000000 seconds 00:32:04.189 00:32:04.189 Latency(us) 00:32:04.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:04.189 =================================================================================================================== 00:32:04.189 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:04.189 10:05:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2046562 00:32:04.447 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:04.447 10:05:21 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:04.447 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:04.447 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:04.447 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:04.447 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:04.447 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:04.447 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2046968 00:32:04.447 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:04.447 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2046968 /var/tmp/bperf.sock 00:32:04.447 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2046968 ']' 00:32:04.447 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:04.447 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:04.447 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:04.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:04.447 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:04.447 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:04.706 [2024-07-15 10:05:21.238076] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:32:04.706 [2024-07-15 10:05:21.238158] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046968 ] 00:32:04.706 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:04.706 Zero copy mechanism will not be used. 00:32:04.706 EAL: No free 2048 kB hugepages reported on node 1 00:32:04.706 [2024-07-15 10:05:21.269788] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
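[editorial aside] Worth noting before the next permutation runs: the pass criterion in these clean tests is not just that bdevperf completed its I/O. After each run (the get_accel_stats call in the trace above) the harness queries bdevperf's accel framework over the private RPC socket and asserts that crc32c operations were actually executed, and by the expected module — 'software' here, since scan_dsa=false. The query is exactly the one in the trace, reflowed onto one pipeline:

# Report which accel module performed crc32c and how many ops it ran;
# the test requires executed > 0 and module_name == "software".
scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
  | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'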
00:32:04.706 [2024-07-15 10:05:21.302630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.707 [2024-07-15 10:05:21.393502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.707 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:04.707 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:04.707 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:04.707 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:04.707 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:05.275 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:05.275 10:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:05.536 nvme0n1 00:32:05.536 10:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:05.536 10:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:05.536 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:05.536 Zero copy mechanism will not be used. 00:32:05.536 Running I/O for 2 seconds... 
00:32:07.440 00:32:07.440 Latency(us) 00:32:07.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.440 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:07.440 nvme0n1 : 2.00 3228.62 403.58 0.00 0.00 4951.49 1365.33 13010.11 00:32:07.440 =================================================================================================================== 00:32:07.440 Total : 3228.62 403.58 0.00 0.00 4951.49 1365.33 13010.11 00:32:07.440 0 00:32:07.699 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:07.699 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:07.699 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:07.699 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:07.699 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:07.699 | select(.opcode=="crc32c") 00:32:07.699 | "\(.module_name) \(.executed)"' 00:32:07.699 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:07.699 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:07.699 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:07.699 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:07.699 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2046968 00:32:07.699 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2046968 ']' 00:32:07.699 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2046968 00:32:07.699 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:07.699 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:07.699 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2046968 00:32:07.957 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:07.958 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:07.958 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2046968' 00:32:07.958 killing process with pid 2046968 00:32:07.958 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2046968 00:32:07.958 Received shutdown signal, test time was about 2.000000 seconds 00:32:07.958 00:32:07.958 Latency(us) 00:32:07.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.958 =================================================================================================================== 00:32:07.958 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:07.958 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2046968 00:32:08.217 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:08.217 10:05:24 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:08.217 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:08.217 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:08.217 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:08.217 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:08.217 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:08.217 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2047377 00:32:08.217 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:08.217 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2047377 /var/tmp/bperf.sock 00:32:08.217 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2047377 ']' 00:32:08.217 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:08.217 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:08.217 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:08.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:08.217 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:08.217 10:05:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:08.217 [2024-07-15 10:05:24.793292] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:32:08.217 [2024-07-15 10:05:24.793378] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2047377 ] 00:32:08.217 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.217 [2024-07-15 10:05:24.825237] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
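[editorial aside] The bdevperf launch/run choreography repeats for every permutation, so it is worth spelling out once. --wait-for-rpc holds subsystem initialization until an explicit framework_start_init RPC (which is why that call appears right after each launch), and -z keeps bdevperf idle until the bdevperf.py perform_tests RPC fires — this is my reading of the flags as used in this log; consult bdevperf's usage text for the authoritative definitions:

# 1. Launch: core mask 0x2, private RPC socket, workload/IO size/queue
#    depth from the permutation, 2-second run, idle until told to start.
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &

# 2. Finish subsystem init (deferred by --wait-for-rpc).
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

# 3. After attaching the remote controller, kick off the measured run.
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests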
00:32:08.217 [2024-07-15 10:05:24.858063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.217 [2024-07-15 10:05:24.954259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.475 10:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:08.475 10:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:08.475 10:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:08.475 10:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:08.475 10:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:08.732 10:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:08.732 10:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:08.991 nvme0n1 00:32:08.991 10:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:08.991 10:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:08.991 Running I/O for 2 seconds... 00:32:11.574 00:32:11.574 Latency(us) 00:32:11.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.574 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:11.574 nvme0n1 : 2.00 20550.98 80.28 0.00 0.00 6217.67 3495.25 16311.18 00:32:11.574 =================================================================================================================== 00:32:11.574 Total : 20550.98 80.28 0.00 0.00 6217.67 3495.25 16311.18 00:32:11.574 0 00:32:11.574 10:05:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:11.574 10:05:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:11.574 10:05:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:11.574 10:05:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:11.574 | select(.opcode=="crc32c") 00:32:11.574 | "\(.module_name) \(.executed)"' 00:32:11.574 10:05:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:11.574 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:11.574 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:11.574 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:11.574 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:11.574 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2047377 00:32:11.574 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 2047377 ']' 00:32:11.574 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2047377 00:32:11.574 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:11.574 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:11.574 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2047377 00:32:11.574 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:11.574 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:11.574 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2047377' 00:32:11.574 killing process with pid 2047377 00:32:11.574 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2047377 00:32:11.574 Received shutdown signal, test time was about 2.000000 seconds 00:32:11.574 00:32:11.574 Latency(us) 00:32:11.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.575 =================================================================================================================== 00:32:11.575 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:11.575 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2047377 00:32:11.575 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:11.575 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:11.575 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:11.575 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:11.575 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:11.575 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:11.575 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:11.575 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2047822 00:32:11.575 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2047822 /var/tmp/bperf.sock 00:32:11.575 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:11.575 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2047822 ']' 00:32:11.575 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:11.575 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:11.575 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:11.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
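[editorial aside] One detail common to all four clean permutations: digest traffic exists at all because each bdevperf instance attaches the remote controller with --ddgst, enabling the NVMe/TCP data digest so every data PDU carries a crc32c that the software accel path must compute on send and verify on receive. The attach call from the trace, reflowed — a corresponding --hdgst flag would cover header digests, which these tests leave off:

# Attach the namespace exported by nvmf_tgt, with data digest enabled.
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0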
00:32:11.575 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:11.575 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:11.575 [2024-07-15 10:05:28.357710] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:32:11.575 [2024-07-15 10:05:28.357786] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2047822 ] 00:32:11.575 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:11.575 Zero copy mechanism will not be used. 00:32:11.833 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.833 [2024-07-15 10:05:28.390495] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:11.833 [2024-07-15 10:05:28.422144] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.833 [2024-07-15 10:05:28.514806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:11.833 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:11.833 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:11.833 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:11.833 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:11.833 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:12.397 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:12.397 10:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:12.654 nvme0n1 00:32:12.654 10:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:12.654 10:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:12.654 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:12.654 Zero copy mechanism will not be used. 00:32:12.654 Running I/O for 2 seconds... 
00:32:14.558 00:32:14.558 Latency(us) 00:32:14.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.558 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:14.558 nvme0n1 : 2.00 3053.07 381.63 0.00 0.00 5229.09 3082.62 8786.68 00:32:14.558 =================================================================================================================== 00:32:14.558 Total : 3053.07 381.63 0.00 0.00 5229.09 3082.62 8786.68 00:32:14.558 0 00:32:14.558 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:14.558 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:14.558 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:14.558 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:14.558 | select(.opcode=="crc32c") 00:32:14.558 | "\(.module_name) \(.executed)"' 00:32:14.558 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:14.816 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:14.816 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:14.816 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:14.816 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:14.816 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2047822 00:32:14.816 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2047822 ']' 00:32:14.816 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2047822 00:32:14.816 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:14.816 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:14.816 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2047822 00:32:15.076 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:15.076 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:15.076 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2047822' 00:32:15.076 killing process with pid 2047822 00:32:15.076 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2047822 00:32:15.076 Received shutdown signal, test time was about 2.000000 seconds 00:32:15.076 00:32:15.076 Latency(us) 00:32:15.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.076 =================================================================================================================== 00:32:15.076 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:15.076 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2047822 00:32:15.076 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2046538 00:32:15.076 10:05:31 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2046538 ']' 00:32:15.076 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2046538 00:32:15.076 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:15.076 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:15.076 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2046538 00:32:15.076 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:15.076 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:15.076 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2046538' 00:32:15.076 killing process with pid 2046538 00:32:15.076 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2046538 00:32:15.076 10:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2046538 00:32:15.334 00:32:15.334 real 0m14.963s 00:32:15.334 user 0m29.682s 00:32:15.334 sys 0m4.002s 00:32:15.334 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:15.334 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:15.334 ************************************ 00:32:15.334 END TEST nvmf_digest_clean 00:32:15.334 ************************************ 00:32:15.334 10:05:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:32:15.334 10:05:32 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:15.334 10:05:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:15.334 10:05:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:15.334 10:05:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:15.595 ************************************ 00:32:15.595 START TEST nvmf_digest_error 00:32:15.595 ************************************ 00:32:15.595 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:32:15.595 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:15.595 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:15.595 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:15.595 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:15.595 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2048335 00:32:15.595 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:15.595 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2048335 00:32:15.595 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2048335 ']' 00:32:15.595 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:32:15.595 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:15.595 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:15.595 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:15.595 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:15.595 [2024-07-15 10:05:32.190229] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:32:15.595 [2024-07-15 10:05:32.190325] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:15.595 EAL: No free 2048 kB hugepages reported on node 1 00:32:15.595 [2024-07-15 10:05:32.229620] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:15.595 [2024-07-15 10:05:32.256510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.595 [2024-07-15 10:05:32.343363] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:15.595 [2024-07-15 10:05:32.343439] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:15.595 [2024-07-15 10:05:32.343465] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:15.595 [2024-07-15 10:05:32.343476] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:15.595 [2024-07-15 10:05:32.343486] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
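[editorial aside] The nvmf_digest_error test starting here inverts the clean check: rather than verifying that good digests flow, it wires the accel error-injection module in front of crc32c on the target and corrupts a batch of operations, expecting the initiator's receive path to flag digest mismatches (the 'data digest error' / 'COMMAND TRANSIENT TRANSPORT ERROR' lines further down) and retry them. Condensed from the RPCs visible below, with the ordering compressed relative to the actual trace:

# Target side (rpc_cmd goes to nvmf_tgt's /var/tmp/spdk.sock inside
# the namespace): route crc32c through the error module, start clean,
# then corrupt 256 operations mid-run.
rpc_cmd accel_assign_opc -o crc32c -m error
rpc_cmd accel_error_inject_error -o crc32c -t disable
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

# Initiator side (bdevperf): keep per-error NVMe statistics and retry
# failed I/O indefinitely, so corrupted digests surface as retried
# transient transport errors rather than test-fatal failures.
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1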
00:32:15.595 [2024-07-15 10:05:32.343511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:15.854 [2024-07-15 10:05:32.432118] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:15.854 null0 00:32:15.854 [2024-07-15 10:05:32.550853] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:15.854 [2024-07-15 10:05:32.575095] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2048356 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2048356 /var/tmp/bperf.sock 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2048356 ']' 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100
00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:15.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:32:15.854 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:15.854 [2024-07-15 10:05:32.622320] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:32:15.854 [2024-07-15 10:05:32.622394] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2048356 ]
00:32:16.113 EAL: No free 2048 kB hugepages reported on node 1
00:32:16.113 [2024-07-15 10:05:32.655294] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:16.113 [2024-07-15 10:05:32.684870] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:16.113 [2024-07-15 10:05:32.776764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:16.113 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:32:16.113 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:32:16.113 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:16.113 10:05:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:16.372 10:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:16.372 10:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:16.372 10:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:16.372 10:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:16.372 10:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:16.372 10:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:16.944 nvme0n1
00:32:16.944 10:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:32:16.944 10:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:16.944 10:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:16.944 10:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:16.944 10:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:16.944 10:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:16.944 Running I/O for 2 seconds...
00:32:17.202 [2024-07-15 10:05:33.742870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.202 [2024-07-15 10:05:33.742938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.202 [2024-07-15 10:05:33.742957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.202 [2024-07-15 10:05:33.758894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.202 [2024-07-15 10:05:33.758938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.202 [2024-07-15 10:05:33.758956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.202 [2024-07-15 10:05:33.769650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.202 [2024-07-15 10:05:33.769695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.202 [2024-07-15 10:05:33.769711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.202 [2024-07-15 10:05:33.785573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.202 [2024-07-15 10:05:33.785602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.202 [2024-07-15 10:05:33.785633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.202 [2024-07-15 10:05:33.798980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.202 [2024-07-15 10:05:33.799020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.202 [2024-07-15 10:05:33.799037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.202 [2024-07-15 10:05:33.810072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.202 [2024-07-15 10:05:33.810101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.202 [2024-07-15 10:05:33.810132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.202 [2024-07-15 10:05:33.822745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.202 [2024-07-15 10:05:33.822775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.202 [2024-07-15 10:05:33.822791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.202 [2024-07-15 10:05:33.835634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.202 [2024-07-15 10:05:33.835664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.202 [2024-07-15 10:05:33.835680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.202 [2024-07-15 10:05:33.849822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.202 [2024-07-15 10:05:33.849851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.202 [2024-07-15 10:05:33.849891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.202 [2024-07-15 10:05:33.860854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.202 [2024-07-15 10:05:33.860891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.202 [2024-07-15 10:05:33.860909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.202 [2024-07-15 10:05:33.875113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.202 [2024-07-15 10:05:33.875157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.202 [2024-07-15 10:05:33.875173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.202 [2024-07-15 10:05:33.887360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.202 [2024-07-15 10:05:33.887389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.202 [2024-07-15 10:05:33.887420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.202 [2024-07-15 10:05:33.901745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.203 [2024-07-15 10:05:33.901775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.203 [2024-07-15 10:05:33.901791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.203 [2024-07-15 10:05:33.912971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.203 [2024-07-15 10:05:33.913000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.203 [2024-07-15 10:05:33.913016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.203 [2024-07-15 10:05:33.929643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.203 [2024-07-15 10:05:33.929671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.203 [2024-07-15 10:05:33.929703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.203 [2024-07-15 10:05:33.940800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.203 [2024-07-15 10:05:33.940827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.203 [2024-07-15 10:05:33.940858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.203 [2024-07-15 10:05:33.953902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.203 [2024-07-15 10:05:33.953932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.203 [2024-07-15 10:05:33.953955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.203 [2024-07-15 10:05:33.966798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.203 [2024-07-15 10:05:33.966826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.203 [2024-07-15 10:05:33.966857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.203 [2024-07-15 10:05:33.980372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.203 [2024-07-15 10:05:33.980403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.203 [2024-07-15 10:05:33.980419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.462 [2024-07-15 10:05:33.993045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.462 [2024-07-15 10:05:33.993076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.462 [2024-07-15 10:05:33.993092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.462 [2024-07-15 10:05:34.004826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.462 [2024-07-15 10:05:34.004859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.462 [2024-07-15 10:05:34.004886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.462 [2024-07-15 10:05:34.019127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.462 [2024-07-15 10:05:34.019158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.462 [2024-07-15 10:05:34.019175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.462 [2024-07-15 10:05:34.033874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.462 [2024-07-15 10:05:34.033912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.462 [2024-07-15 10:05:34.033928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.462 [2024-07-15 10:05:34.045157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.462 [2024-07-15 10:05:34.045201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.462 [2024-07-15 10:05:34.045218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.462 [2024-07-15 10:05:34.059277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.462 [2024-07-15 10:05:34.059322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.462 [2024-07-15 10:05:34.059338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.462 [2024-07-15 10:05:34.072226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.462 [2024-07-15 10:05:34.072257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.462 [2024-07-15 10:05:34.072274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.462 [2024-07-15 10:05:34.085260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.462 [2024-07-15 10:05:34.085290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.462 [2024-07-15 10:05:34.085306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.462 [2024-07-15 10:05:34.095845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.462 [2024-07-15 10:05:34.095874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.462 [2024-07-15 10:05:34.095920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.462 [2024-07-15 10:05:34.109473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.462 [2024-07-15 10:05:34.109505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.462 [2024-07-15 10:05:34.109521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.462 [2024-07-15 10:05:34.123902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.462 [2024-07-15 10:05:34.123934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.462 [2024-07-15 10:05:34.123951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.462 [2024-07-15 10:05:34.134099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.462 [2024-07-15 10:05:34.134127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.462 [2024-07-15 10:05:34.134161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.462 [2024-07-15 10:05:34.147384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.462 [2024-07-15 10:05:34.147414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.462 [2024-07-15 10:05:34.147431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.462 [2024-07-15 10:05:34.160836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.462 [2024-07-15 10:05:34.160866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.462 [2024-07-15 10:05:34.160895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.462 [2024-07-15 10:05:34.172249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.462 [2024-07-15 10:05:34.172284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.462 [2024-07-15 10:05:34.172310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.462 [2024-07-15 10:05:34.185735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.462 [2024-07-15 10:05:34.185763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.462 [2024-07-15 10:05:34.185779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.462 [2024-07-15 10:05:34.198292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.462 [2024-07-15 10:05:34.198322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.462 [2024-07-15 10:05:34.198338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.462 [2024-07-15 10:05:34.212114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.462 [2024-07-15 10:05:34.212141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.462 [2024-07-15 10:05:34.212170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.463 [2024-07-15 10:05:34.226843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.463 [2024-07-15 10:05:34.226873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.463 [2024-07-15 10:05:34.226904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.463 [2024-07-15 10:05:34.237997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.463 [2024-07-15 10:05:34.238026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.463 [2024-07-15 10:05:34.238042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.721 [2024-07-15 10:05:34.251613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.721 [2024-07-15 10:05:34.251644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.721 [2024-07-15 10:05:34.251660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.721 [2024-07-15 10:05:34.263318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.721 [2024-07-15 10:05:34.263354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.721 [2024-07-15 10:05:34.263374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.721 [2024-07-15 10:05:34.277177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.721 [2024-07-15 10:05:34.277207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.721 [2024-07-15 10:05:34.277225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.721 [2024-07-15 10:05:34.291303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.721 [2024-07-15 10:05:34.291339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.722 [2024-07-15 10:05:34.291356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.722 [2024-07-15 10:05:34.304858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.722 [2024-07-15 10:05:34.304893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.722 [2024-07-15 10:05:34.304910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.722 [2024-07-15 10:05:34.317550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.722 [2024-07-15 10:05:34.317580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.722 [2024-07-15 10:05:34.317596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.722 [2024-07-15 10:05:34.329400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.722 [2024-07-15 10:05:34.329443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.722 [2024-07-15 10:05:34.329458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.722 [2024-07-15 10:05:34.344467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.722 [2024-07-15 10:05:34.344499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.722 [2024-07-15 10:05:34.344517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.722 [2024-07-15 10:05:34.360737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.722 [2024-07-15 10:05:34.360764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.722 [2024-07-15 10:05:34.360794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.722 [2024-07-15 10:05:34.374117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.722 [2024-07-15 10:05:34.374147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.722 [2024-07-15 10:05:34.374163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.722 [2024-07-15 10:05:34.386057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.722 [2024-07-15 10:05:34.386084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.722 [2024-07-15 10:05:34.386114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.722 [2024-07-15 10:05:34.399736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.722 [2024-07-15 10:05:34.399769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.722 [2024-07-15 10:05:34.399787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.722 [2024-07-15 10:05:34.414200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.722 [2024-07-15 10:05:34.414233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.722 [2024-07-15 10:05:34.414252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.722 [2024-07-15 10:05:34.427874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.722 [2024-07-15 10:05:34.427914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.722 [2024-07-15 10:05:34.427932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.722 [2024-07-15 10:05:34.439863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.722 [2024-07-15 10:05:34.439903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.722 [2024-07-15 10:05:34.439922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.722 [2024-07-15 10:05:34.452996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.722 [2024-07-15 10:05:34.453026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.722 [2024-07-15 10:05:34.453057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.722 [2024-07-15 10:05:34.465689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.722 [2024-07-15 10:05:34.465718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.722 [2024-07-15 10:05:34.465749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.722 [2024-07-15 10:05:34.478420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.722 [2024-07-15 10:05:34.478450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.722 [2024-07-15 10:05:34.478482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.722 [2024-07-15 10:05:34.490018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.722 [2024-07-15 10:05:34.490045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.722 [2024-07-15 10:05:34.490076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.505656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.505691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.505710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.521070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.521108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.521146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.536193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.536224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.536240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.547484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.547517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.547535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.561610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.561654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.561669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.575971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.575999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.576029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.588839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.588868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.588908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.602513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.602543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.602560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.613771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.613804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.613822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.627780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.627813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.627831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.643780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.643813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.643831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.655950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.655977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.656008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.670636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.670666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.670683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.685632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.685661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.685677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.696911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.696939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.696955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.710672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.710702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.710718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.724112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.724141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.724157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.736378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.736408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.736425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.750292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.750335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.750356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:17.980 [2024-07-15 10:05:34.762634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:17.980 [2024-07-15 10:05:34.762676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.980 [2024-07-15 10:05:34.762692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.239 [2024-07-15 10:05:34.776846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.239 [2024-07-15 10:05:34.776888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.239 [2024-07-15 10:05:34.776909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.239 [2024-07-15 10:05:34.791905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.239 [2024-07-15 10:05:34.791938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.239 [2024-07-15 10:05:34.791954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.239 [2024-07-15 10:05:34.803525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.239 [2024-07-15 10:05:34.803555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.239 [2024-07-15 10:05:34.803572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.239 [2024-07-15 10:05:34.815731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.239 [2024-07-15 10:05:34.815764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.240 [2024-07-15 10:05:34.815783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.240 [2024-07-15 10:05:34.829034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.240 [2024-07-15 10:05:34.829061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.240 [2024-07-15 10:05:34.829091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.240 [2024-07-15 10:05:34.843387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.240 [2024-07-15 10:05:34.843418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.240 [2024-07-15 10:05:34.843434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.240 [2024-07-15 10:05:34.855717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.240 [2024-07-15 10:05:34.855760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.240 [2024-07-15 10:05:34.855776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.240 [2024-07-15 10:05:34.866815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.240 [2024-07-15 10:05:34.866850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.240 [2024-07-15 10:05:34.866868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.240 [2024-07-15 10:05:34.881640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.240 [2024-07-15 10:05:34.881670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.240 [2024-07-15 10:05:34.881686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.240 [2024-07-15 10:05:34.894769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.240 [2024-07-15 10:05:34.894799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.240 [2024-07-15 10:05:34.894814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.240 [2024-07-15 10:05:34.908861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.240 [2024-07-15 10:05:34.908914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.240 [2024-07-15 10:05:34.908930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.240 [2024-07-15 10:05:34.919827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.240 [2024-07-15 10:05:34.919860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.240 [2024-07-15 10:05:34.919889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.240 [2024-07-15 10:05:34.932847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.240 [2024-07-15 10:05:34.932888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.240 [2024-07-15 10:05:34.932908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.240 [2024-07-15 10:05:34.945430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.240 [2024-07-15 10:05:34.945459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.240 [2024-07-15 10:05:34.945476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.240 [2024-07-15 10:05:34.958716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.240 [2024-07-15 10:05:34.958748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.240 [2024-07-15 10:05:34.958767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.240 [2024-07-15 10:05:34.971507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.240 [2024-07-15 10:05:34.971536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.240 [2024-07-15 10:05:34.971567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.240 [2024-07-15 10:05:34.983009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.240 [2024-07-15 10:05:34.983036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.240 [2024-07-15 10:05:34.983066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.240 [2024-07-15 10:05:34.998512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.240 [2024-07-15 10:05:34.998542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.240 [2024-07-15 10:05:34.998558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.240 [2024-07-15 10:05:35.013948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.240 [2024-07-15 10:05:35.013977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.240 [2024-07-15 10:05:35.013994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.499 [2024-07-15 10:05:35.026812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.026846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.026863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.500 [2024-07-15 10:05:35.041034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.041069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.041088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.500 [2024-07-15 10:05:35.055743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.055774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.055791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.500 [2024-07-15 10:05:35.068088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.068119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.068136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.500 [2024-07-15 10:05:35.082920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.082965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.082980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.500 [2024-07-15 10:05:35.095462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.095489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.095524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.500 [2024-07-15 10:05:35.106499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.106533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.106551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.500 [2024-07-15 10:05:35.121858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.121896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.121913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.500 [2024-07-15 10:05:35.137184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.137232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.137250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.500 [2024-07-15 10:05:35.153416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.153450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.153468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.500 [2024-07-15 10:05:35.167868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.167906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.167923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.500 [2024-07-15 10:05:35.180548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.180576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.180607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.500 [2024-07-15 10:05:35.193161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.193190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.193206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.500 [2024-07-15 10:05:35.206815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.206844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.206874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.500 [2024-07-15 10:05:35.220800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.220830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.220847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.500 [2024-07-15 10:05:35.232130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.232160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.232176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.500 [2024-07-15 10:05:35.246560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.246587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.246617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.500 [2024-07-15 10:05:35.258477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.258507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.258523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.500 [2024-07-15 10:05:35.271856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.271910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.271928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.500 [2024-07-15 10:05:35.283499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.500 [2024-07-15 10:05:35.283545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.500 [2024-07-15 10:05:35.283562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.759 [2024-07-15 10:05:35.297576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.759 [2024-07-15 10:05:35.297605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.759 [2024-07-15 10:05:35.297636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.759 [2024-07-15 10:05:35.310938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.759 [2024-07-15 10:05:35.310969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.759 [2024-07-15 10:05:35.310986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.759 [2024-07-15 10:05:35.323135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.759 [2024-07-15 10:05:35.323165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.759 [2024-07-15 10:05:35.323188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.759 [2024-07-15 10:05:35.335472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.759 [2024-07-15 10:05:35.335501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.759 [2024-07-15 10:05:35.335518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.759 [2024-07-15 10:05:35.348587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.759 [2024-07-15 10:05:35.348616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.759 [2024-07-15 10:05:35.348632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.759 [2024-07-15 10:05:35.359099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.759 [2024-07-15 10:05:35.359126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.759 [2024-07-15 10:05:35.359156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.759 [2024-07-15 10:05:35.373752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.759 [2024-07-15 10:05:35.373779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.759 [2024-07-15 10:05:35.373810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.759 [2024-07-15 10:05:35.390049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.759 [2024-07-15 10:05:35.390078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.759 [2024-07-15 10:05:35.390094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.759 [2024-07-15 10:05:35.403516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.759 [2024-07-15 10:05:35.403545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.759 [2024-07-15 10:05:35.403562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.759 [2024-07-15 10:05:35.417522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.759 [2024-07-15 10:05:35.417550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.759 [2024-07-15 10:05:35.417565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.759 [2024-07-15 10:05:35.433500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.759 [2024-07-15 10:05:35.433530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.759 [2024-07-15 10:05:35.433546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.759 [2024-07-15 10:05:35.444847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.759 [2024-07-15 10:05:35.444891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.759 [2024-07-15 10:05:35.444909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.759 [2024-07-15 10:05:35.459517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.759 [2024-07-15 10:05:35.459548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.759 [2024-07-15 10:05:35.459564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.759 [2024-07-15 10:05:35.470209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.759 [2024-07-15 10:05:35.470239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.759 [2024-07-15 10:05:35.470255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.759 [2024-07-15 10:05:35.484450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.759 [2024-07-15 10:05:35.484479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.759 [2024-07-15 10:05:35.484496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.759 [2024-07-15 10:05:35.496176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.759 [2024-07-15 10:05:35.496205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.759 [2024-07-15 10:05:35.496221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.759 [2024-07-15 10:05:35.510231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.759 [2024-07-15 10:05:35.510258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.759 [2024-07-15 10:05:35.510287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.759 [2024-07-15 10:05:35.523118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.759 [2024-07-15 10:05:35.523146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.759 [2024-07-15 10:05:35.523176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:18.759 [2024-07-15 10:05:35.535732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:18.760 [2024-07-15 10:05:35.535764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:18.760 [2024-07-15 10:05:35.535781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:19.018 [2024-07-15 10:05:35.547786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0)
00:32:19.019 [2024-07-15 10:05:35.547817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:19.019 [2024-07-15 10:05:35.547850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:19.019 [2024-07-15 10:05:35.562705]
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0) 00:32:19.019 [2024-07-15 10:05:35.562751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-15 10:05:35.562767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.019 [2024-07-15 10:05:35.572932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0) 00:32:19.019 [2024-07-15 10:05:35.572960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-15 10:05:35.572991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.019 [2024-07-15 10:05:35.587275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0) 00:32:19.019 [2024-07-15 10:05:35.587302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-15 10:05:35.587332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.019 [2024-07-15 10:05:35.597965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0) 00:32:19.019 [2024-07-15 10:05:35.597993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-15 10:05:35.598023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.019 [2024-07-15 10:05:35.612733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0) 00:32:19.019 [2024-07-15 10:05:35.612764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-15 10:05:35.612780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.019 [2024-07-15 10:05:35.626312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0) 00:32:19.019 [2024-07-15 10:05:35.626340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-15 10:05:35.626370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.019 [2024-07-15 10:05:35.639341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0) 00:32:19.019 [2024-07-15 10:05:35.639370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-15 10:05:35.639387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:32:19.019 [2024-07-15 10:05:35.650664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0) 00:32:19.019 [2024-07-15 10:05:35.650694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-15 10:05:35.650710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.019 [2024-07-15 10:05:35.664032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0) 00:32:19.019 [2024-07-15 10:05:35.664062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-15 10:05:35.664086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.019 [2024-07-15 10:05:35.676948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0) 00:32:19.019 [2024-07-15 10:05:35.676976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-15 10:05:35.677008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.019 [2024-07-15 10:05:35.689104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0) 00:32:19.019 [2024-07-15 10:05:35.689133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-15 10:05:35.689149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.019 [2024-07-15 10:05:35.700806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0) 00:32:19.019 [2024-07-15 10:05:35.700833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-15 10:05:35.700848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.019 [2024-07-15 10:05:35.714498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0) 00:32:19.019 [2024-07-15 10:05:35.714527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-15 10:05:35.714544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.019 [2024-07-15 10:05:35.727046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x102e0d0) 00:32:19.019 [2024-07-15 10:05:35.727074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.019 [2024-07-15 10:05:35.727091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:19.019
00:32:19.019 Latency(us)
00:32:19.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:19.019 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:19.019 nvme0n1 : 2.00 19306.74 75.42 0.00 0.00 6620.43 3495.25 19223.89
00:32:19.019 ===================================================================================================================
00:32:19.019 Total : 19306.74 75.42 0.00 0.00 6620.43 3495.25 19223.89
00:32:19.019 0
00:32:19.019
10:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
10:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
10:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:19.019 | .driver_specific
00:32:19.019 | .nvme_error
00:32:19.019 | .status_code
00:32:19.019 | .command_transient_transport_error'
10:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:19.279 10:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 151 > 0 ))
00:32:19.280 10:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2048356
00:32:19.280 10:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2048356 ']'
00:32:19.280 10:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2048356
00:32:19.280 10:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:32:19.280 10:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:32:19.280 10:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2048356
00:32:19.280 10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:32:19.280 10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:32:19.280 10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2048356'
killing process with pid 2048356
00:32:19.280 10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2048356
Received shutdown signal, test time was about 2.000000 seconds
00:32:19.280
00:32:19.280 Latency(us)
00:32:19.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:19.280 ===================================================================================================================
00:32:19.280 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:19.280 10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2048356
00:32:19.539 10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2048765
10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2048765 /var/tmp/bperf.sock
10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2048765 ']'
10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:19.539 [2024-07-15 10:05:36.301902] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:32:19.539 [2024-07-15 10:05:36.301982] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2048765 ]
00:32:19.539 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:19.539 Zero copy mechanism will not be used.
00:32:19.539 EAL: No free 2048 kB hugepages reported on node 1
00:32:19.797 [2024-07-15 10:05:36.336893] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
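The trace above is SPDK's usual launch-and-wait pattern: bdevperf is started suspended with -z on a private RPC socket, and waitforlisten polls that socket until rpc.py gets an answer. Below is a minimal sketch of the same pattern using the binary and socket paths shown in the log; the poll loop and the variable names ($SPDK, $SOCK) are illustrative stand-ins for autotest_common.sh's waitforlisten, not a copy of it.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed workspace root, per the log
  SOCK=/var/tmp/bperf.sock

  # -z keeps bdevperf idle until an RPC client triggers perform_tests;
  # -m 2 is core mask 0x2, i.e. core 1, matching the reactor line below
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # poll the UNIX domain socket until the RPC server answers
  # (the log's waitforlisten uses max_retries=100 in the same way)
  retries=100
  until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    (( retries-- > 0 )) || { echo "bdevperf (pid $bperfpid) never listened on $SOCK" >&2; exit 1; }
    sleep 0.1
  done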
00:32:19.797 [2024-07-15 10:05:36.365771] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:19.797 [2024-07-15 10:05:36.452146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:20.055 10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:20.055 10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:20.055 10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:20.315 10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:20.315 10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:20.315 10:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:20.574 nvme0n1
00:32:20.574 10:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:20.574 10:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:20.574 10:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:20.574 10:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:20.574 10:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:20.574 10:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:20.833 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:20.833 Zero copy mechanism will not be used.
00:32:20.833 Running I/O for 2 seconds...
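The sequence traced above is the whole recipe for this test case: per-status NVMe error counting and unlimited bdev retries are switched on, the controller is attached with TCP data digest enabled, crc32c corruption is armed in the accel layer, and the queued bdevperf job is kicked off over the RPC socket. Condensed into a runnable sketch, reusing the assumed $SPDK and $SOCK variables from the previous snippet; the rpc helper and the single-line jq path are conveniences, not the script's own code, though every RPC and flag is verbatim from the trace.

  rpc() { "$SPDK/scripts/rpc.py" -s "$SOCK" "$@"; }

  # count completions per NVMe status (--nvme-error-stat) and retry failed
  # I/O indefinitely (-1), so injected digest errors are retried, not fatal
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc accel_error_inject_error -o crc32c -t disable   # start from a clean injection state

  # attach the NVMe-oF TCP target with data digest enabled (--ddgst)
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # corrupt accel's software crc32c results (flags verbatim from the trace),
  # so receive-path data digest verification fails on the initiator
  rpc accel_error_inject_error -o crc32c -t corrupt -i 32

  # run the queued bdevperf job, then read back the transient-transport-error
  # counter, as digest.sh's get_transient_errcount does further up in this log
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
  rpc bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Each corrupted digest then shows up below as a "data digest error" on the qpair followed by a completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly what the counter tallies.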
00:32:20.833 [2024-07-15 10:05:37.411947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.833 [2024-07-15 10:05:37.411998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.833 [2024-07-15 10:05:37.412017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.833 [2024-07-15 10:05:37.423068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.833 [2024-07-15 10:05:37.423102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.833 [2024-07-15 10:05:37.423120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.833 [2024-07-15 10:05:37.433936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.833 [2024-07-15 10:05:37.433969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.833 [2024-07-15 10:05:37.433987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.833 [2024-07-15 10:05:37.443574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.833 [2024-07-15 10:05:37.443611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.833 [2024-07-15 10:05:37.443641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.833 [2024-07-15 10:05:37.453188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.833 [2024-07-15 10:05:37.453235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.833 [2024-07-15 10:05:37.453255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.833 [2024-07-15 10:05:37.462466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.833 [2024-07-15 10:05:37.462498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.833 [2024-07-15 10:05:37.462532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.833 [2024-07-15 10:05:37.471163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.833 [2024-07-15 10:05:37.471208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.833 [2024-07-15 10:05:37.471226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.833 [2024-07-15 10:05:37.480143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.833 [2024-07-15 10:05:37.480191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.833 [2024-07-15 10:05:37.480208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.833 [2024-07-15 10:05:37.488965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.833 [2024-07-15 10:05:37.488997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.833 [2024-07-15 10:05:37.489015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.833 [2024-07-15 10:05:37.498371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.833 [2024-07-15 10:05:37.498406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.833 [2024-07-15 10:05:37.498425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.833 [2024-07-15 10:05:37.507196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.833 [2024-07-15 10:05:37.507254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.833 [2024-07-15 10:05:37.507270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.833 [2024-07-15 10:05:37.515797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.833 [2024-07-15 10:05:37.515826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.833 [2024-07-15 10:05:37.515858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.833 [2024-07-15 10:05:37.524422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.833 [2024-07-15 10:05:37.524459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.833 [2024-07-15 10:05:37.524492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.834 [2024-07-15 10:05:37.533126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.834 [2024-07-15 10:05:37.533183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.834 [2024-07-15 10:05:37.533200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.834 [2024-07-15 10:05:37.541716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.834 [2024-07-15 10:05:37.541760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.834 [2024-07-15 10:05:37.541778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.834 [2024-07-15 10:05:37.550384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.834 [2024-07-15 10:05:37.550415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.834 [2024-07-15 10:05:37.550447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.834 [2024-07-15 10:05:37.558991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.834 [2024-07-15 10:05:37.559029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.834 [2024-07-15 10:05:37.559047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.834 [2024-07-15 10:05:37.567515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.834 [2024-07-15 10:05:37.567547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.834 [2024-07-15 10:05:37.567564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.834 [2024-07-15 10:05:37.576087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.834 [2024-07-15 10:05:37.576117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.834 [2024-07-15 10:05:37.576135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.834 [2024-07-15 10:05:37.584770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.834 [2024-07-15 10:05:37.584800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.834 [2024-07-15 10:05:37.584834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.834 [2024-07-15 10:05:37.593335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.834 [2024-07-15 10:05:37.593381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:20.834 [2024-07-15 10:05:37.593398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.834 [2024-07-15 10:05:37.601887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.834 [2024-07-15 10:05:37.601917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.834 [2024-07-15 10:05:37.601933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.834 [2024-07-15 10:05:37.610494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:20.834 [2024-07-15 10:05:37.610524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.834 [2024-07-15 10:05:37.610557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.092 [2024-07-15 10:05:37.619080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.092 [2024-07-15 10:05:37.619111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.092 [2024-07-15 10:05:37.619129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.092 [2024-07-15 10:05:37.627549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.092 [2024-07-15 10:05:37.627593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.092 [2024-07-15 10:05:37.627610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.092 [2024-07-15 10:05:37.636049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.092 [2024-07-15 10:05:37.636080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.636097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.644591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.644622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.644638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.653144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.653174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.653191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.662074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.662105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.662122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.670784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.670821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.670855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.679566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.679597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.679629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.688336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.688366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.688400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.697077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.697108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.697125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.705683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.705712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.705745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.714237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.714267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.714300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.722841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.722896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.722915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.731359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.731389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.731422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.740199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.740229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.740262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.748872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.748911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.748929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.757248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.757278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.757295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.765925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.765955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.765971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.774482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 
00:32:21.093 [2024-07-15 10:05:37.774512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.774544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.783223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.783267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.783284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.791903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.791933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.791950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.800387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.800417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.800457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.809161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.809192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.809209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.817590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.817634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.817658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.826293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.826323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.826355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.834775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.834820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.834838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.843244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.843273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.843305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.851849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.851902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.851921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.860491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.860520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.860553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.093 [2024-07-15 10:05:37.869182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.093 [2024-07-15 10:05:37.869227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.093 [2024-07-15 10:05:37.869245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.353 [2024-07-15 10:05:37.877981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.353 [2024-07-15 10:05:37.878013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.353 [2024-07-15 10:05:37.878030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.353 [2024-07-15 10:05:37.886414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.353 [2024-07-15 10:05:37.886457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.353 [2024-07-15 10:05:37.886474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.353 [2024-07-15 10:05:37.894964] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.353 [2024-07-15 10:05:37.895001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.353 [2024-07-15 10:05:37.895018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.353 [2024-07-15 10:05:37.903515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.353 [2024-07-15 10:05:37.903559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.353 [2024-07-15 10:05:37.903577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.353 [2024-07-15 10:05:37.912150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.353 [2024-07-15 10:05:37.912180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.353 [2024-07-15 10:05:37.912197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.353 [2024-07-15 10:05:37.920779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.353 [2024-07-15 10:05:37.920827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.353 [2024-07-15 10:05:37.920844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.353 [2024-07-15 10:05:37.929466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.353 [2024-07-15 10:05:37.929511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.353 [2024-07-15 10:05:37.929530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.353 [2024-07-15 10:05:37.937954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.353 [2024-07-15 10:05:37.937985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.353 [2024-07-15 10:05:37.938009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.353 [2024-07-15 10:05:37.946517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.353 [2024-07-15 10:05:37.946549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.353 [2024-07-15 10:05:37.946566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:32:21.353 [2024-07-15 10:05:37.954995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.353 [2024-07-15 10:05:37.955026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.353 [2024-07-15 10:05:37.955043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.353 [2024-07-15 10:05:37.963757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.353 [2024-07-15 10:05:37.963802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:37.963819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:37.972536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:37.972566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:37.972583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:37.981141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:37.981171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:37.981188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:37.989670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:37.989715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:37.989733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:37.998262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:37.998292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:37.998309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:38.006768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:38.006811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:38.006828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:38.015366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:38.015410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:38.015429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:38.024099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:38.024129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:38.024146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:38.032663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:38.032692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:38.032724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:38.041211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:38.041256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:38.041280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:38.049949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:38.049979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:38.049995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:38.058488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:38.058518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:38.058550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:38.067024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:38.067055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:38.067072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:38.075625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:38.075654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:38.075687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:38.084186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:38.084217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:38.084234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:38.092624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:38.092653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:38.092670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:38.101160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:38.101205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:38.101221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:38.109839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:38.109868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:38.109911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:38.118389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:38.118418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:38.118451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:38.127041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:38.127071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:21.354 [2024-07-15 10:05:38.127088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.354 [2024-07-15 10:05:38.135599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.354 [2024-07-15 10:05:38.135644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.354 [2024-07-15 10:05:38.135660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.614 [2024-07-15 10:05:38.144164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.614 [2024-07-15 10:05:38.144209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.614 [2024-07-15 10:05:38.144226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.614 [2024-07-15 10:05:38.152820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.614 [2024-07-15 10:05:38.152849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.614 [2024-07-15 10:05:38.152891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.614 [2024-07-15 10:05:38.161410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.614 [2024-07-15 10:05:38.161440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.614 [2024-07-15 10:05:38.161455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.614 [2024-07-15 10:05:38.170070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.614 [2024-07-15 10:05:38.170114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.614 [2024-07-15 10:05:38.170131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.614 [2024-07-15 10:05:38.178734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.614 [2024-07-15 10:05:38.178766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.614 [2024-07-15 10:05:38.178783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.187212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.187242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.187280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.195910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.195955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.195972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.204382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.204411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.204443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.212993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.213038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.213055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.221476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.221506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.221523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.230202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.230232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.230263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.238887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.238917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.238933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.247583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.247612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.247644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.256132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.256163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.256179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.264684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.264722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.264755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.273185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.273216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.273248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.282219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.282249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.282283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.292684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.292715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.292748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.303210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.303256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.303272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.313650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 
00:32:21.615 [2024-07-15 10:05:38.313685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.313703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.321816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.321846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.321884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.331254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.331299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.331315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.340645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.340676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.340708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.349966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.350012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.350030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.359605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.359636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.359668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.368274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.368304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.368335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.376687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.376716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.615 [2024-07-15 10:05:38.376748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.615 [2024-07-15 10:05:38.385336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.615 [2024-07-15 10:05:38.385365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.616 [2024-07-15 10:05:38.385382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.616 [2024-07-15 10:05:38.393765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.616 [2024-07-15 10:05:38.393794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.616 [2024-07-15 10:05:38.393826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.402191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.402222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.402239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.411245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.411275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.411307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.420198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.420227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.420268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.428910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.428943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.428960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.437587] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.437633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.437650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.446179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.446225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.446242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.454897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.454928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.454945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.463342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.463371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.463403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.471935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.471966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.471982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.480628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.480658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.480674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.489122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.489152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.489169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:32:21.874 [2024-07-15 10:05:38.497579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.497633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.497652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.506099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.506129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.506146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.514790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.514820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.514852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.523338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.523366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.523398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.532011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.532041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.532059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.540588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.540618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.540634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.549207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.549253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.549270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.557768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.557798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.557829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.566291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.566323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.566345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.574803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.874 [2024-07-15 10:05:38.574833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.874 [2024-07-15 10:05:38.574864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.874 [2024-07-15 10:05:38.583321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.875 [2024-07-15 10:05:38.583364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.875 [2024-07-15 10:05:38.583382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.875 [2024-07-15 10:05:38.591955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.875 [2024-07-15 10:05:38.592000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.875 [2024-07-15 10:05:38.592017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.875 [2024-07-15 10:05:38.600555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.875 [2024-07-15 10:05:38.600584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.875 [2024-07-15 10:05:38.600616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.875 [2024-07-15 10:05:38.609159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.875 [2024-07-15 10:05:38.609204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.875 [2024-07-15 10:05:38.609220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.875 [2024-07-15 10:05:38.617742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.875 [2024-07-15 10:05:38.617785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.875 [2024-07-15 10:05:38.617801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:21.875 [2024-07-15 10:05:38.626382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.875 [2024-07-15 10:05:38.626412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.875 [2024-07-15 10:05:38.626444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:21.875 [2024-07-15 10:05:38.635180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.875 [2024-07-15 10:05:38.635211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.875 [2024-07-15 10:05:38.635227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.875 [2024-07-15 10:05:38.643843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.875 [2024-07-15 10:05:38.643890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.875 [2024-07-15 10:05:38.643910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:21.875 [2024-07-15 10:05:38.652419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:21.875 [2024-07-15 10:05:38.652449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.875 [2024-07-15 10:05:38.652480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.134 [2024-07-15 10:05:38.661078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.134 [2024-07-15 10:05:38.661109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.134 [2024-07-15 10:05:38.661125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.134 [2024-07-15 10:05:38.669634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.134 [2024-07-15 10:05:38.669664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:22.134 [2024-07-15 10:05:38.669695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.134 [2024-07-15 10:05:38.678195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.134 [2024-07-15 10:05:38.678241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.134 [2024-07-15 10:05:38.678257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.134 [2024-07-15 10:05:38.686706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.134 [2024-07-15 10:05:38.686739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.134 [2024-07-15 10:05:38.686757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.134 [2024-07-15 10:05:38.695208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.134 [2024-07-15 10:05:38.695238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.134 [2024-07-15 10:05:38.695270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.134 [2024-07-15 10:05:38.703784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.134 [2024-07-15 10:05:38.703814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.134 [2024-07-15 10:05:38.703845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.134 [2024-07-15 10:05:38.712395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.134 [2024-07-15 10:05:38.712440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.134 [2024-07-15 10:05:38.712457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.134 [2024-07-15 10:05:38.721141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.134 [2024-07-15 10:05:38.721186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.134 [2024-07-15 10:05:38.721203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.134 [2024-07-15 10:05:38.729833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.134 [2024-07-15 10:05:38.729864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.134 [2024-07-15 10:05:38.729907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.134 [2024-07-15 10:05:38.738435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.134 [2024-07-15 10:05:38.738481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.134 [2024-07-15 10:05:38.738499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.134 [2024-07-15 10:05:38.747012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.134 [2024-07-15 10:05:38.747043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.134 [2024-07-15 10:05:38.747060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.134 [2024-07-15 10:05:38.755954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.134 [2024-07-15 10:05:38.755998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.134 [2024-07-15 10:05:38.756015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.134 [2024-07-15 10:05:38.765212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.134 [2024-07-15 10:05:38.765259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.134 [2024-07-15 10:05:38.765277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.134 [2024-07-15 10:05:38.774476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.134 [2024-07-15 10:05:38.774509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.134 [2024-07-15 10:05:38.774528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.134 [2024-07-15 10:05:38.783819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.134 [2024-07-15 10:05:38.783853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.135 [2024-07-15 10:05:38.783871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.135 [2024-07-15 10:05:38.793118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.135 [2024-07-15 10:05:38.793148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.135 [2024-07-15 10:05:38.793189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.135 [2024-07-15 10:05:38.802471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.135 [2024-07-15 10:05:38.802505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.135 [2024-07-15 10:05:38.802524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.135 [2024-07-15 10:05:38.812015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.135 [2024-07-15 10:05:38.812045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.135 [2024-07-15 10:05:38.812062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.135 [2024-07-15 10:05:38.821440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.135 [2024-07-15 10:05:38.821474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.135 [2024-07-15 10:05:38.821493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.135 [2024-07-15 10:05:38.831405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.135 [2024-07-15 10:05:38.831439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.135 [2024-07-15 10:05:38.831458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.135 [2024-07-15 10:05:38.840718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.135 [2024-07-15 10:05:38.840752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.135 [2024-07-15 10:05:38.840771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.135 [2024-07-15 10:05:38.850001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.135 [2024-07-15 10:05:38.850032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.135 [2024-07-15 10:05:38.850048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.135 [2024-07-15 10:05:38.859236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 
00:32:22.135 [2024-07-15 10:05:38.859283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.135 [2024-07-15 10:05:38.859302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.135 [2024-07-15 10:05:38.868646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.135 [2024-07-15 10:05:38.868680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.135 [2024-07-15 10:05:38.868699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.135 [2024-07-15 10:05:38.877953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.135 [2024-07-15 10:05:38.877988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.135 [2024-07-15 10:05:38.878005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.135 [2024-07-15 10:05:38.887186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.135 [2024-07-15 10:05:38.887230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.135 [2024-07-15 10:05:38.887246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.135 [2024-07-15 10:05:38.896360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.135 [2024-07-15 10:05:38.896393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.135 [2024-07-15 10:05:38.896412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.135 [2024-07-15 10:05:38.905599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.135 [2024-07-15 10:05:38.905633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.135 [2024-07-15 10:05:38.905651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.135 [2024-07-15 10:05:38.914964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.135 [2024-07-15 10:05:38.915009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.135 [2024-07-15 10:05:38.915025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.394 [2024-07-15 10:05:38.924203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.394 [2024-07-15 10:05:38.924248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.394 [2024-07-15 10:05:38.924264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.394 [2024-07-15 10:05:38.933660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.394 [2024-07-15 10:05:38.933693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.394 [2024-07-15 10:05:38.933712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.394 [2024-07-15 10:05:38.942917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.394 [2024-07-15 10:05:38.942965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.394 [2024-07-15 10:05:38.942981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.394 [2024-07-15 10:05:38.952334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.394 [2024-07-15 10:05:38.952368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.394 [2024-07-15 10:05:38.952387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.394 [2024-07-15 10:05:38.961581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.394 [2024-07-15 10:05:38.961616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.394 [2024-07-15 10:05:38.961634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.394 [2024-07-15 10:05:38.970941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.394 [2024-07-15 10:05:38.970971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.394 [2024-07-15 10:05:38.971003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.394 [2024-07-15 10:05:38.980113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.395 [2024-07-15 10:05:38.980143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.395 [2024-07-15 10:05:38.980175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.395 [2024-07-15 10:05:38.989390] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.395 [2024-07-15 10:05:38.989423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.395 [2024-07-15 10:05:38.989442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.395 [2024-07-15 10:05:39.000119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.395 [2024-07-15 10:05:39.000150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.395 [2024-07-15 10:05:39.000183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.395 [2024-07-15 10:05:39.011285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.395 [2024-07-15 10:05:39.011320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.395 [2024-07-15 10:05:39.011339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.395 [2024-07-15 10:05:39.021098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.395 [2024-07-15 10:05:39.021130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.395 [2024-07-15 10:05:39.021162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.395 [2024-07-15 10:05:39.031183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.395 [2024-07-15 10:05:39.031214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.395 [2024-07-15 10:05:39.031246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.395 [2024-07-15 10:05:39.041003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.395 [2024-07-15 10:05:39.041033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.395 [2024-07-15 10:05:39.041055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.395 [2024-07-15 10:05:39.050959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.395 [2024-07-15 10:05:39.050989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.395 [2024-07-15 10:05:39.051020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:32:22.395 [2024-07-15 10:05:39.060800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.395 [2024-07-15 10:05:39.060836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.395 [2024-07-15 10:05:39.060855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.395 [2024-07-15 10:05:39.071944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.395 [2024-07-15 10:05:39.071991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.395 [2024-07-15 10:05:39.072008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.395 [2024-07-15 10:05:39.083857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.395 [2024-07-15 10:05:39.083913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.395 [2024-07-15 10:05:39.083930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.395 [2024-07-15 10:05:39.095353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.395 [2024-07-15 10:05:39.095389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.395 [2024-07-15 10:05:39.095408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.395 [2024-07-15 10:05:39.107252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.395 [2024-07-15 10:05:39.107288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.395 [2024-07-15 10:05:39.107308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.395 [2024-07-15 10:05:39.118608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.395 [2024-07-15 10:05:39.118643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.395 [2024-07-15 10:05:39.118662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.395 [2024-07-15 10:05:39.130069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.395 [2024-07-15 10:05:39.130101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.395 [2024-07-15 10:05:39.130118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.395 [2024-07-15 10:05:39.141512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.395 [2024-07-15 10:05:39.141548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.395 [2024-07-15 10:05:39.141567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.395 [2024-07-15 10:05:39.153060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.395 [2024-07-15 10:05:39.153090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.395 [2024-07-15 10:05:39.153107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.395 [2024-07-15 10:05:39.163652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.395 [2024-07-15 10:05:39.163687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.395 [2024-07-15 10:05:39.163707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.395 [2024-07-15 10:05:39.173565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.395 [2024-07-15 10:05:39.173600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.395 [2024-07-15 10:05:39.173619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.652 [2024-07-15 10:05:39.184972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.652 [2024-07-15 10:05:39.185017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.652 [2024-07-15 10:05:39.185033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.652 [2024-07-15 10:05:39.196177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.652 [2024-07-15 10:05:39.196224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.652 [2024-07-15 10:05:39.196244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.652 [2024-07-15 10:05:39.207061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.652 [2024-07-15 10:05:39.207110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.652 [2024-07-15 10:05:39.207127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.652 [2024-07-15 10:05:39.217613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.652 [2024-07-15 10:05:39.217648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.652 [2024-07-15 10:05:39.217667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.652 [2024-07-15 10:05:39.229186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.652 [2024-07-15 10:05:39.229234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.652 [2024-07-15 10:05:39.229260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.652 [2024-07-15 10:05:39.240293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.652 [2024-07-15 10:05:39.240329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.652 [2024-07-15 10:05:39.240348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.652 [2024-07-15 10:05:39.252403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.652 [2024-07-15 10:05:39.252438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.653 [2024-07-15 10:05:39.252457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.653 [2024-07-15 10:05:39.263642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.653 [2024-07-15 10:05:39.263677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.653 [2024-07-15 10:05:39.263696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.653 [2024-07-15 10:05:39.275043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.653 [2024-07-15 10:05:39.275087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.653 [2024-07-15 10:05:39.275104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.653 [2024-07-15 10:05:39.285247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.653 [2024-07-15 10:05:39.285282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.653 [2024-07-15 10:05:39.285301] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.653 [2024-07-15 10:05:39.296473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.653 [2024-07-15 10:05:39.296509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.653 [2024-07-15 10:05:39.296528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.653 [2024-07-15 10:05:39.307500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.653 [2024-07-15 10:05:39.307536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.653 [2024-07-15 10:05:39.307554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.653 [2024-07-15 10:05:39.318407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.653 [2024-07-15 10:05:39.318443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.653 [2024-07-15 10:05:39.318462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.653 [2024-07-15 10:05:39.328064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.653 [2024-07-15 10:05:39.328117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.653 [2024-07-15 10:05:39.328134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.653 [2024-07-15 10:05:39.338429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.653 [2024-07-15 10:05:39.338464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.653 [2024-07-15 10:05:39.338484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.653 [2024-07-15 10:05:39.348094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.653 [2024-07-15 10:05:39.348125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.653 [2024-07-15 10:05:39.348142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.653 [2024-07-15 10:05:39.357358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.653 [2024-07-15 10:05:39.357392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.653 
[2024-07-15 10:05:39.357411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.653 [2024-07-15 10:05:39.366698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.653 [2024-07-15 10:05:39.366732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.653 [2024-07-15 10:05:39.366751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.653 [2024-07-15 10:05:39.376045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.653 [2024-07-15 10:05:39.376089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.653 [2024-07-15 10:05:39.376106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.653 [2024-07-15 10:05:39.385323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.653 [2024-07-15 10:05:39.385358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.653 [2024-07-15 10:05:39.385376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:22.653 [2024-07-15 10:05:39.394747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.653 [2024-07-15 10:05:39.394781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.653 [2024-07-15 10:05:39.394800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:22.653 [2024-07-15 10:05:39.404111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd0bf00) 00:32:22.653 [2024-07-15 10:05:39.404140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.653 [2024-07-15 10:05:39.404157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:22.653
00:32:22.653 Latency(us)
00:32:22.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:22.653 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:22.653 nvme0n1 : 2.00 3392.12 424.02 0.00 0.00 4710.82 1322.86 11845.03
00:32:22.653 ===================================================================================================================
00:32:22.653 Total : 3392.12 424.02 0.00 0.00 4710.82 1322.86 11845.03
00:32:22.653 0
00:32:22.653 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:22.653 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:22.653 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
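The trace continues below with the jq filter that get_transient_errcount applies to that iostat dump: it selects .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error, the per-controller count of COMMAND TRANSIENT TRANSPORT ERROR completions that bdev_nvme_set_options --nvme-error-stat keeps. A standalone sketch of the same readback, assuming jq is installed and reusing this run's rpc.py path and bperf socket:

  # Dump per-bdev I/O statistics over the bdevperf RPC socket and extract the
  # transient transport error counter for the attached controller.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The test only asserts that the counter is positive ((( 219 > 0 )) in this run); Fail/s is 0.00 in the table above even though 219 transient errors were counted, consistent with the bdev layer retrying each failed read until it succeeds.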
host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:22.653 | .driver_specific
00:32:22.653 | .nvme_error
00:32:22.653 | .status_code
00:32:22.653 | .command_transient_transport_error'
00:32:22.653 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:22.910 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 219 > 0 )) 00:32:22.910 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2048765 00:32:22.910 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2048765 ']' 00:32:22.910 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2048765 00:32:22.910 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:32:22.910 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:22.910 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2048765 00:32:23.173 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:23.173 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:23.173 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2048765' 00:32:23.173 killing process with pid 2048765 00:32:23.173 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2048765 00:32:23.173 Received shutdown signal, test time was about 2.000000 seconds 00:32:23.173
00:32:23.173 Latency(us)
00:32:23.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:23.173 ===================================================================================================================
00:32:23.173 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:23.173 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2048765 00:32:23.173 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:32:23.173 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:23.173 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:32:23.173 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:23.173 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:23.173 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2049292 00:32:23.173 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:32:23.173 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2049292 /var/tmp/bperf.sock 00:32:23.173 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2049292 ']' 00:32:23.173 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:23.173 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local
max_retries=100 00:32:23.173 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:23.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:23.173 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:23.173 10:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:23.173 [2024-07-15 10:05:39.955954] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:32:23.173 [2024-07-15 10:05:39.956035] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2049292 ] 00:32:23.430 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.430 [2024-07-15 10:05:39.988393] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:23.430 [2024-07-15 10:05:40.016535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.430 [2024-07-15 10:05:40.108886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.688 10:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:23.688 10:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:32:23.688 10:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:23.688 10:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:23.945 10:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:23.945 10:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.945 10:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:23.945 10:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.945 10:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:23.945 10:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:24.204 nvme0n1 00:32:24.204 10:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:24.204 10:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.204 10:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:24.204 10:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.204 10:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
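Condensed, the randwrite pass that was just set up amounts to the sequence below, followed by the perform_tests call traced next. The sketch reuses this run's sockets and addresses and assumes it is run from the SPDK repo root; the two accel_error_inject_error calls go through rpc_cmd in the trace, which does not show their destination socket, so a plain rpc.py invocation stands in for them here:

  # bdevperf side: keep per-error-code NVMe counters (--nvme-error-stat) and
  # let the bdev layer retry failed I/O indefinitely (--bdev-retry-count -1).
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1
  # Start with crc32c corruption disabled, then attach the target with data
  # digest enabled (--ddgst) so a CRC32C is computed and checked per data PDU.
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt the next 256 crc32c operations: those digests can no longer match,
  # so every affected command completes as COMMAND TRANSIENT TRANSPORT ERROR.
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # Kick off the queued 4096-byte, qd=128 randwrite job for its 2-second window.
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests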
host/digest.sh@69 -- # bperf_py perform_tests 00:32:24.204 10:05:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:24.204 Running I/O for 2 seconds... 00:32:24.204 [2024-07-15 10:05:40.938721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.204 [2024-07-15 10:05:40.939018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.204 [2024-07-15 10:05:40.939055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.204 [2024-07-15 10:05:40.953152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.204 [2024-07-15 10:05:40.953437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.204 [2024-07-15 10:05:40.953469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.204 [2024-07-15 10:05:40.967489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.204 [2024-07-15 10:05:40.967761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.204 [2024-07-15 10:05:40.967793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.204 [2024-07-15 10:05:40.981669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.204 [2024-07-15 10:05:40.981950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.204 [2024-07-15 10:05:40.981978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.464 [2024-07-15 10:05:40.996062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.464 [2024-07-15 10:05:40.996343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.464 [2024-07-15 10:05:40.996374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.464 [2024-07-15 10:05:41.010247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.464 [2024-07-15 10:05:41.010511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.464 [2024-07-15 10:05:41.010542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.464 [2024-07-15 10:05:41.024321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.464 [2024-07-15 10:05:41.024584] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.464 [2024-07-15 10:05:41.024615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.464 [2024-07-15 10:05:41.038406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.464 [2024-07-15 10:05:41.038669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.464 [2024-07-15 10:05:41.038698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.464 [2024-07-15 10:05:41.052428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.464 [2024-07-15 10:05:41.052691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.464 [2024-07-15 10:05:41.052721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.464 [2024-07-15 10:05:41.066520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.464 [2024-07-15 10:05:41.066793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.464 [2024-07-15 10:05:41.066829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.464 [2024-07-15 10:05:41.080702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.464 [2024-07-15 10:05:41.080978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.464 [2024-07-15 10:05:41.081006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.464 [2024-07-15 10:05:41.094661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.464 [2024-07-15 10:05:41.094924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.464 [2024-07-15 10:05:41.094966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.464 [2024-07-15 10:05:41.108715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.464 [2024-07-15 10:05:41.108996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.464 [2024-07-15 10:05:41.109024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.464 [2024-07-15 10:05:41.122669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.464 [2024-07-15 
10:05:41.122952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.464 [2024-07-15 10:05:41.122979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.464 [2024-07-15 10:05:41.136693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.464 [2024-07-15 10:05:41.136978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.464 [2024-07-15 10:05:41.137006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.464 [2024-07-15 10:05:41.150656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.464 [2024-07-15 10:05:41.150922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.464 [2024-07-15 10:05:41.150965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.464 [2024-07-15 10:05:41.164698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.464 [2024-07-15 10:05:41.164974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.464 [2024-07-15 10:05:41.165002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.464 [2024-07-15 10:05:41.178728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.464 [2024-07-15 10:05:41.179014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.464 [2024-07-15 10:05:41.179041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.464 [2024-07-15 10:05:41.192605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.464 [2024-07-15 10:05:41.192907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.464 [2024-07-15 10:05:41.192939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.464 [2024-07-15 10:05:41.206564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.464 [2024-07-15 10:05:41.206828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.464 [2024-07-15 10:05:41.206860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.464 [2024-07-15 10:05:41.220744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.464 
[2024-07-15 10:05:41.221082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.464 [2024-07-15 10:05:41.221110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.464 [2024-07-15 10:05:41.234696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.464 [2024-07-15 10:05:41.234971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.465 [2024-07-15 10:05:41.234999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.725 [2024-07-15 10:05:41.248729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.725 [2024-07-15 10:05:41.249065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.725 [2024-07-15 10:05:41.249092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.725 [2024-07-15 10:05:41.262823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.725 [2024-07-15 10:05:41.263152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.725 [2024-07-15 10:05:41.263194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.725 [2024-07-15 10:05:41.276892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.725 [2024-07-15 10:05:41.277247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.725 [2024-07-15 10:05:41.277277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.725 [2024-07-15 10:05:41.290976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.725 [2024-07-15 10:05:41.291295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.725 [2024-07-15 10:05:41.291325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.725 [2024-07-15 10:05:41.305013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.725 [2024-07-15 10:05:41.305307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.725 [2024-07-15 10:05:41.305337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.725 [2024-07-15 10:05:41.319063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with 
pdu=0x2000190fe2e8 00:32:24.725 [2024-07-15 10:05:41.319327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.725 [2024-07-15 10:05:41.319356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.725 [2024-07-15 10:05:41.333142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.725 [2024-07-15 10:05:41.333427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.725 [2024-07-15 10:05:41.333457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.725 [2024-07-15 10:05:41.347150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.725 [2024-07-15 10:05:41.347432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.725 [2024-07-15 10:05:41.347462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.725 [2024-07-15 10:05:41.361283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.725 [2024-07-15 10:05:41.361545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.725 [2024-07-15 10:05:41.361575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.725 [2024-07-15 10:05:41.375342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.725 [2024-07-15 10:05:41.375606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.725 [2024-07-15 10:05:41.375636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.725 [2024-07-15 10:05:41.389408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.725 [2024-07-15 10:05:41.389671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.725 [2024-07-15 10:05:41.389700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.725 [2024-07-15 10:05:41.403512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.725 [2024-07-15 10:05:41.403776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.725 [2024-07-15 10:05:41.403806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.725 [2024-07-15 10:05:41.417566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.725 [2024-07-15 10:05:41.417829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.725 [2024-07-15 10:05:41.417859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.725 [2024-07-15 10:05:41.431602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.725 [2024-07-15 10:05:41.431865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.725 [2024-07-15 10:05:41.431909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.725 [2024-07-15 10:05:41.445656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.725 [2024-07-15 10:05:41.445942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.725 [2024-07-15 10:05:41.445970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.725 [2024-07-15 10:05:41.459618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.725 [2024-07-15 10:05:41.459893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.725 [2024-07-15 10:05:41.459926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.725 [2024-07-15 10:05:41.473688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.725 [2024-07-15 10:05:41.473970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.725 [2024-07-15 10:05:41.473998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.725 [2024-07-15 10:05:41.487748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.725 [2024-07-15 10:05:41.488082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.725 [2024-07-15 10:05:41.488109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.725 [2024-07-15 10:05:41.501859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.725 [2024-07-15 10:05:41.502187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.725 [2024-07-15 10:05:41.502232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.985 [2024-07-15 10:05:41.515947] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.985 [2024-07-15 10:05:41.516275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.985 [2024-07-15 10:05:41.516306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.985 [2024-07-15 10:05:41.529980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.985 [2024-07-15 10:05:41.530301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.985 [2024-07-15 10:05:41.530331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.985 [2024-07-15 10:05:41.544076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.985 [2024-07-15 10:05:41.544344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.985 [2024-07-15 10:05:41.544374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.986 [2024-07-15 10:05:41.558147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.986 [2024-07-15 10:05:41.558442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.986 [2024-07-15 10:05:41.558472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.986 [2024-07-15 10:05:41.572098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.986 [2024-07-15 10:05:41.572369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.986 [2024-07-15 10:05:41.572398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.986 [2024-07-15 10:05:41.586130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.986 [2024-07-15 10:05:41.586401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.986 [2024-07-15 10:05:41.586431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.986 [2024-07-15 10:05:41.600239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.986 [2024-07-15 10:05:41.600501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.986 [2024-07-15 10:05:41.600531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.986 [2024-07-15 10:05:41.614232] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.986 [2024-07-15 10:05:41.614495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.986 [2024-07-15 10:05:41.614525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.986 [2024-07-15 10:05:41.628219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.986 [2024-07-15 10:05:41.628484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.986 [2024-07-15 10:05:41.628513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.986 [2024-07-15 10:05:41.642309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.986 [2024-07-15 10:05:41.642571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.986 [2024-07-15 10:05:41.642600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.986 [2024-07-15 10:05:41.656347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.986 [2024-07-15 10:05:41.656618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.986 [2024-07-15 10:05:41.656648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.986 [2024-07-15 10:05:41.670401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.986 [2024-07-15 10:05:41.670665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.986 [2024-07-15 10:05:41.670695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.986 [2024-07-15 10:05:41.684404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.986 [2024-07-15 10:05:41.684666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.986 [2024-07-15 10:05:41.684695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.986 [2024-07-15 10:05:41.698411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8 00:32:24.986 [2024-07-15 10:05:41.698682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.986 [2024-07-15 10:05:41.698713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.986 [2024-07-15 
10:05:41.712320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8
00:32:24.986 [2024-07-15 10:05:41.712584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:24.986 [2024-07-15 10:05:41.712616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:32:24.986 [2024-07-15 10:05:41.726397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8
00:32:24.986 [2024-07-15 10:05:41.726662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:24.986 [2024-07-15 10:05:41.726693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
[... the same three-line pattern (data_crc32_calc_done digest error, WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats roughly every 14 ms for cids 0, 1, 2, 125 and 126 on tqpair (0x154a9f0), from 10:05:41.740402 through 10:05:42.913374 ...]
00:32:26.289 [2024-07-15 10:05:42.926936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154a9f0) with pdu=0x2000190fe2e8
00:32:26.289 [2024-07-15 10:05:42.927179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:26.289 [2024-07-15 10:05:42.927223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:32:26.289
00:32:26.289 Latency(us)
00:32:26.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:26.289 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:26.289 nvme0n1 : 2.01 18194.55 71.07 0.00 0.00 7018.09 6189.51 14563.56
00:32:26.289 ===================================================================================================================
00:32:26.289 Total : 18194.55 71.07 0.00 0.00 7018.09 6189.51 14563.56
00:32:26.289 0
00:32:26.289 10:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:26.289 10:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:26.289 10:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:32:26.289 10:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:26.550 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 ))
00:32:26.550 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2049292
00:32:26.550 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2049292 ']'
00:32:26.550 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2049292
00:32:26.550 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:32:26.550 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:32:26.550 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2049292
00:32:26.550 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:32:26.550 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:32:26.550 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2049292'
00:32:26.550 killing process with pid 2049292
00:32:26.550 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2049292
00:32:26.550 Received shutdown signal, test time was about 2.000000 seconds
00:32:26.550
00:32:26.550 Latency(us)
00:32:26.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:26.550 ===================================================================================================================
00:32:26.550 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:26.550 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2049292
00:32:26.808 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:32:26.808 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:26.808 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:32:26.808 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:32:26.808 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:32:26.808 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2049693
00:32:26.808 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:32:26.808 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2049693 /var/tmp/bperf.sock
00:32:26.808 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2049693 ']'
00:32:26.808 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:26.808 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:32:26.808 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:26.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
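With the 4 KiB randwrite case finished and its bdevperf killed, run_bperf_err starts a fresh bdevperf for the next case: random 128 KiB writes at queue depth 16. Pieced together from the traced commands, the launch looks roughly like this (the backgrounding and $! capture are inferred, and waitforlisten is the autotest_common.sh helper whose probing is traced above):

    # Start bdevperf pinned to core 1 (-m 2) with its own RPC socket (-r).
    # -z keeps it idle until perform_tests arrives over RPC; -t 2 bounds the run to 2 s.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock   # poll until the socket accepts RPCs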
00:32:26.808 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:32:26.808 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:26.808 [2024-07-15 10:05:43.499248] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:32:26.808 [2024-07-15 10:05:43.499335] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2049693 ]
00:32:26.808 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:26.808 Zero copy mechanism will not be used.
00:32:26.808 EAL: No free 2048 kB hugepages reported on node 1
00:32:26.808 [2024-07-15 10:05:43.531660] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:26.808 [2024-07-15 10:05:43.561342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:27.066 [2024-07-15 10:05:43.651763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:27.066 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:32:27.066 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:32:27.066 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:27.066 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:27.323 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:27.323 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:27.323 10:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:27.323 10:05:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:27.323 10:05:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:27.323 10:05:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:27.581 nvme0n1
00:32:27.581 10:05:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:27.581 10:05:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:27.581 10:05:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:27.581 10:05:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:27.581 10:05:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:27.581 10:05:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
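Before the run is kicked off, the trace shows the same setup recipe as the first case: enable per-bdev NVMe error statistics with unlimited retries (so transient errors are counted instead of failing the job), attach the controller over TCP with data digest (--ddgst) enabled, and arm the accel error injector to corrupt crc32c results, which surfaces as the data-digest errors seen below. Condensed into plain commands (the rpc wrapper is shorthand introduced here for the traced rpc.py calls):

    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1    # count errors, never give up retrying
    rpc accel_error_inject_error -o crc32c -t disable                    # clear any stale injection first
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                   # exposes nvme0n1 with DDGST on
    rpc accel_error_inject_error -o crc32c -t corrupt -i 32              # corrupt crc32c results (-i 32 as traced)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests                             # start the timed run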
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:27.840 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:27.840 Zero copy mechanism will not be used. 00:32:27.840 Running I/O for 2 seconds... 00:32:27.840 [2024-07-15 10:05:44.477816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:27.840 [2024-07-15 10:05:44.478224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.840 [2024-07-15 10:05:44.478265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.840 [2024-07-15 10:05:44.489513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:27.840 [2024-07-15 10:05:44.489894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.840 [2024-07-15 10:05:44.489942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.840 [2024-07-15 10:05:44.501726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:27.840 [2024-07-15 10:05:44.502104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.840 [2024-07-15 10:05:44.502149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.840 [2024-07-15 10:05:44.513998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:27.840 [2024-07-15 10:05:44.514365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.840 [2024-07-15 10:05:44.514398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.840 [2024-07-15 10:05:44.525547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:27.840 [2024-07-15 10:05:44.525942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.840 [2024-07-15 10:05:44.525971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.840 [2024-07-15 10:05:44.537405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:27.840 [2024-07-15 10:05:44.537774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.840 [2024-07-15 10:05:44.537807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.840 [2024-07-15 10:05:44.548969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:27.840 [2024-07-15 
10:05:44.549330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.840 [2024-07-15 10:05:44.549378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.840 [2024-07-15 10:05:44.561467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:27.840 [2024-07-15 10:05:44.561825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.840 [2024-07-15 10:05:44.561884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.840 [2024-07-15 10:05:44.573117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:27.840 [2024-07-15 10:05:44.573500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.840 [2024-07-15 10:05:44.573547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.840 [2024-07-15 10:05:44.583959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:27.840 [2024-07-15 10:05:44.584296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.840 [2024-07-15 10:05:44.584324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.840 [2024-07-15 10:05:44.595413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:27.840 [2024-07-15 10:05:44.595612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.840 [2024-07-15 10:05:44.595641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.840 [2024-07-15 10:05:44.605932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:27.840 [2024-07-15 10:05:44.606310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.840 [2024-07-15 10:05:44.606339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.840 [2024-07-15 10:05:44.616308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:27.840 [2024-07-15 10:05:44.616749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.840 [2024-07-15 10:05:44.616778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.626347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with 
pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.626800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.626843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.635421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.635872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.635930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.645796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.646206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.646250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.656074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.656415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.656442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.665445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.665866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.665921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.675937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.676300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.676328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.685974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.686358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.686400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.696429] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.696857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.696892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.707090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.707537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.707564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.717102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.717510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.717538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.728036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.728484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.728511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.738475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.738937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.738965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.749032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.749473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.749515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.759271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.759604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.759632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.769427] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.769730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.769772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.779902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.780335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.780362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.790484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.790846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.790874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.800515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.800865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.800901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.810279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.810680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.810722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.821135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.821595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.821622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.099 [2024-07-15 10:05:44.831476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.099 [2024-07-15 10:05:44.831854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.099 [2024-07-15 10:05:44.831903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
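The repeating tcp.c:2067:data_crc32_calc_done errors above are the NVMe/TCP data digest check firing: when data digests are negotiated at connect time, each data PDU carries a trailing DDGST field, a CRC32C computed over the PDU's DATA section, and a receive-side mismatch is surfaced as a transport error rather than silently accepted. Below is a minimal standalone sketch of that CRC32C (bitwise form, reflected polynomial 0x82F63B78); it is illustrative only, not SPDK's optimized implementation.

/* crc32c_demo.c - CRC32C (Castagnoli), the checksum behind the
 * NVMe/TCP DDGST field. Build: cc -o crc32c_demo crc32c_demo.c */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;                  /* initial seed */

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int k = 0; k < 8; k++) {
			/* 0x82F63B78 = CRC32C polynomial, reflected */
			crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
		}
	}
	return crc ^ 0xFFFFFFFFu;                    /* final inversion */
}

int main(void)
{
	/* standard check value: CRC32C("123456789") == 0xE3069283 */
	const char *msg = "123456789";
	printf("ddgst = 0x%08x\n",
	       (unsigned)crc32c((const uint8_t *)msg, strlen(msg)));
	return 0;
}

The receiver recomputes this value over the reassembled DATA section and compares it against the PDU's trailing digest; data_crc32_calc_done is the completion callback where the two disagree, and each affected WRITE is then completed with the TRANSIENT TRANSPORT ERROR (00/22) status seen throughout this log.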
00:32:28.099 [2024-07-15 10:05:44.842504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.100 [2024-07-15 10:05:44.842918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.100 [2024-07-15 10:05:44.842946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.100 [2024-07-15 10:05:44.853966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.100 [2024-07-15 10:05:44.854330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.100 [2024-07-15 10:05:44.854358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.100 [2024-07-15 10:05:44.864459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.100 [2024-07-15 10:05:44.864895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.100 [2024-07-15 10:05:44.864923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.100 [2024-07-15 10:05:44.874858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.100 [2024-07-15 10:05:44.875312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.100 [2024-07-15 10:05:44.875342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.358 [2024-07-15 10:05:44.885135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.358 [2024-07-15 10:05:44.885536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.358 [2024-07-15 10:05:44.885567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.358 [2024-07-15 10:05:44.895751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.358 [2024-07-15 10:05:44.896126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.358 [2024-07-15 10:05:44.896156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.358 [2024-07-15 10:05:44.905224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.358 [2024-07-15 10:05:44.905640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:44.905675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:44.915283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:44.915667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:44.915696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:44.925599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:44.926013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:44.926041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:44.935486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:44.935905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:44.935934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:44.946871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:44.947345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:44.947374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:44.957723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:44.958064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:44.958092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:44.968612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:44.969057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:44.969085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:44.978526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:44.978947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:44.978975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:44.989399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:44.989795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:44.989823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:45.000131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:45.000513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:45.000557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:45.010655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:45.011058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:45.011085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:45.021317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:45.021695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:45.021723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:45.031548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:45.031997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:45.032025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:45.041954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:45.042300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:45.042329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:45.051771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:45.052280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:45.052306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:45.061736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:45.062135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:45.062164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:45.072134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:45.072533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:45.072561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:45.082862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:45.083260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:45.083289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:45.093352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:45.093805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:45.093836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:45.103519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:45.104005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:45.104032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:45.113757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:45.114114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:45.114142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:45.124512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:45.124939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 
[2024-07-15 10:05:45.124967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.359 [2024-07-15 10:05:45.135235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.359 [2024-07-15 10:05:45.135696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.359 [2024-07-15 10:05:45.135725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.145368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.620 [2024-07-15 10:05:45.145833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.620 [2024-07-15 10:05:45.145861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.155750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.620 [2024-07-15 10:05:45.156189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.620 [2024-07-15 10:05:45.156218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.166504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.620 [2024-07-15 10:05:45.166912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.620 [2024-07-15 10:05:45.166941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.176950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.620 [2024-07-15 10:05:45.177487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.620 [2024-07-15 10:05:45.177514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.187238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.620 [2024-07-15 10:05:45.187617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.620 [2024-07-15 10:05:45.187645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.197587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.620 [2024-07-15 10:05:45.197996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.620 [2024-07-15 10:05:45.198024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.208065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.620 [2024-07-15 10:05:45.208424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.620 [2024-07-15 10:05:45.208452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.218038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.620 [2024-07-15 10:05:45.218397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.620 [2024-07-15 10:05:45.218425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.229212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.620 [2024-07-15 10:05:45.229573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.620 [2024-07-15 10:05:45.229601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.238868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.620 [2024-07-15 10:05:45.239206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.620 [2024-07-15 10:05:45.239234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.249947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.620 [2024-07-15 10:05:45.250378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.620 [2024-07-15 10:05:45.250407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.260470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.620 [2024-07-15 10:05:45.260887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.620 [2024-07-15 10:05:45.260915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.270430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.620 [2024-07-15 10:05:45.270697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.620 [2024-07-15 10:05:45.270725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.280457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.620 [2024-07-15 10:05:45.280817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.620 [2024-07-15 10:05:45.280845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.290308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.620 [2024-07-15 10:05:45.290629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.620 [2024-07-15 10:05:45.290657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.301957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.620 [2024-07-15 10:05:45.302350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.620 [2024-07-15 10:05:45.302379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.312723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.620 [2024-07-15 10:05:45.313123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.620 [2024-07-15 10:05:45.313151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.323617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.620 [2024-07-15 10:05:45.323985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.620 [2024-07-15 10:05:45.324014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.333818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.620 [2024-07-15 10:05:45.334127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.620 [2024-07-15 10:05:45.334155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.620 [2024-07-15 10:05:45.343967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.621 [2024-07-15 10:05:45.344280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.621 [2024-07-15 10:05:45.344308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.621 [2024-07-15 10:05:45.354314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.621 [2024-07-15 10:05:45.354753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.621 [2024-07-15 10:05:45.354787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.621 [2024-07-15 10:05:45.364561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.621 [2024-07-15 10:05:45.364935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.621 [2024-07-15 10:05:45.364963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.621 [2024-07-15 10:05:45.374463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.621 [2024-07-15 10:05:45.374841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.621 [2024-07-15 10:05:45.374869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.621 [2024-07-15 10:05:45.383920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.621 [2024-07-15 10:05:45.384270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.621 [2024-07-15 10:05:45.384299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.621 [2024-07-15 10:05:45.393696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.621 [2024-07-15 10:05:45.394115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.621 [2024-07-15 10:05:45.394145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.621 [2024-07-15 10:05:45.403344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.882 [2024-07-15 10:05:45.403737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.882 [2024-07-15 10:05:45.403768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.882 [2024-07-15 10:05:45.413210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.882 
[2024-07-15 10:05:45.413570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.882 [2024-07-15 10:05:45.413598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.882 [2024-07-15 10:05:45.423949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.882 [2024-07-15 10:05:45.424339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.882 [2024-07-15 10:05:45.424368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.882 [2024-07-15 10:05:45.434349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.882 [2024-07-15 10:05:45.434685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.882 [2024-07-15 10:05:45.434713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.882 [2024-07-15 10:05:45.445405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.882 [2024-07-15 10:05:45.445828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.882 [2024-07-15 10:05:45.445857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.882 [2024-07-15 10:05:45.454869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.882 [2024-07-15 10:05:45.455279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.882 [2024-07-15 10:05:45.455306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.882 [2024-07-15 10:05:45.465514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.882 [2024-07-15 10:05:45.465954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.882 [2024-07-15 10:05:45.465984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.882 [2024-07-15 10:05:45.476021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.882 [2024-07-15 10:05:45.476452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.882 [2024-07-15 10:05:45.476480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.882 [2024-07-15 10:05:45.486942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.882 [2024-07-15 10:05:45.487404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.882 [2024-07-15 10:05:45.487432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.882 [2024-07-15 10:05:45.498595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.882 [2024-07-15 10:05:45.499077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.882 [2024-07-15 10:05:45.499105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.882 [2024-07-15 10:05:45.509988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.882 [2024-07-15 10:05:45.510389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.882 [2024-07-15 10:05:45.510431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.882 [2024-07-15 10:05:45.520353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.882 [2024-07-15 10:05:45.520746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.882 [2024-07-15 10:05:45.520774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.882 [2024-07-15 10:05:45.530527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.882 [2024-07-15 10:05:45.530918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.882 [2024-07-15 10:05:45.530947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.883 [2024-07-15 10:05:45.540306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.883 [2024-07-15 10:05:45.540668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.883 [2024-07-15 10:05:45.540696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.883 [2024-07-15 10:05:45.550097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.883 [2024-07-15 10:05:45.550397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.883 [2024-07-15 10:05:45.550425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.883 [2024-07-15 10:05:45.560064] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.883 [2024-07-15 10:05:45.560388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.883 [2024-07-15 10:05:45.560416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.883 [2024-07-15 10:05:45.569907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.883 [2024-07-15 10:05:45.570199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.883 [2024-07-15 10:05:45.570227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.883 [2024-07-15 10:05:45.579979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.883 [2024-07-15 10:05:45.580307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.883 [2024-07-15 10:05:45.580335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.883 [2024-07-15 10:05:45.589257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.883 [2024-07-15 10:05:45.589602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.883 [2024-07-15 10:05:45.589631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.883 [2024-07-15 10:05:45.598687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.883 [2024-07-15 10:05:45.599137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.883 [2024-07-15 10:05:45.599165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.883 [2024-07-15 10:05:45.609174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.883 [2024-07-15 10:05:45.609434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.883 [2024-07-15 10:05:45.609463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.883 [2024-07-15 10:05:45.619074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.883 [2024-07-15 10:05:45.619426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.883 [2024-07-15 10:05:45.619461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
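Each spdk_nvme_print_completion line is a decoded NVMe completion-queue entry: "(00/22)" is status code type 0x0 (generic command status) / status code 0x22 (Transient Transport Error, a retryable failure), and sqhd/p/m/dnr are the remaining fields of CQE dwords 2-3. The following small sketch decodes that status word, with field positions per the NVMe base specification; the function name is illustrative, not SPDK's.

/* cqe_status_demo.c - unpack the 16-bit status field of an NVMe
 * completion queue entry (upper half of CQE dword 3):
 *   bit  0     P   - phase tag
 *   bits 8:1   SC  - status code
 *   bits 11:9  SCT - status code type
 *   bit  14    M   - more information available via Get Log Page
 *   bit  15    DNR - do not retry
 */
#include <stdint.h>
#include <stdio.h>

static void print_status(uint16_t status)
{
	unsigned p   = status & 0x1u;
	unsigned sc  = (status >> 1) & 0xFFu;
	unsigned sct = (status >> 9) & 0x7u;
	unsigned m   = (status >> 14) & 0x1u;
	unsigned dnr = (status >> 15) & 0x1u;

	printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
	/* SCT 0x0, SC 0x22 packs to 0x22 << 1 = 0x0044, matching the
	 * "(00/22) ... p:0 m:0 dnr:0" completions in this log */
	print_status(0x0044);
	return 0;
}

Because dnr:0 in every completion, retries are permitted, which is consistent with the workload continuing to issue WRITEs on qid:1 after each digest failure above.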
00:32:28.883 [2024-07-15 10:05:45.628662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.883 [2024-07-15 10:05:45.629046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.883 [2024-07-15 10:05:45.629074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.883 [2024-07-15 10:05:45.638892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.883 [2024-07-15 10:05:45.639198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.883 [2024-07-15 10:05:45.639227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.883 [2024-07-15 10:05:45.649293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.883 [2024-07-15 10:05:45.649708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.883 [2024-07-15 10:05:45.649738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.883 [2024-07-15 10:05:45.660458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:28.883 [2024-07-15 10:05:45.660823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.883 [2024-07-15 10:05:45.660852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.142 [2024-07-15 10:05:45.670082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:29.142 [2024-07-15 10:05:45.670439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.142 [2024-07-15 10:05:45.670468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.142 [2024-07-15 10:05:45.679601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:29.142 [2024-07-15 10:05:45.679866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.142 [2024-07-15 10:05:45.679904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.142 [2024-07-15 10:05:45.689958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:29.142 [2024-07-15 10:05:45.690374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.142 [2024-07-15 10:05:45.690402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.142 [2024-07-15 10:05:45.700796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:29.142 [2024-07-15 10:05:45.701131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.142 [2024-07-15 10:05:45.701159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.142 [2024-07-15 10:05:45.710800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:29.142 [2024-07-15 10:05:45.711108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.142 [2024-07-15 10:05:45.711137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.142 [2024-07-15 10:05:45.720948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:29.142 [2024-07-15 10:05:45.721296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.142 [2024-07-15 10:05:45.721324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.142 [2024-07-15 10:05:45.730438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:29.142 [2024-07-15 10:05:45.730785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.142 [2024-07-15 10:05:45.730813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.142 [2024-07-15 10:05:45.739895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:29.142 [2024-07-15 10:05:45.740206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.142 [2024-07-15 10:05:45.740234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.142 [2024-07-15 10:05:45.749751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:29.142 [2024-07-15 10:05:45.750112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.142 [2024-07-15 10:05:45.750140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.142 [2024-07-15 10:05:45.760434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:29.142 [2024-07-15 10:05:45.760822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.142 [2024-07-15 10:05:45.760850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.142 [2024-07-15 10:05:45.770658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:29.142 [2024-07-15 10:05:45.770993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.142 [2024-07-15 10:05:45.771022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.142 [2024-07-15 10:05:45.780809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:29.142 [2024-07-15 10:05:45.781140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.142 [2024-07-15 10:05:45.781169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.142 [2024-07-15 10:05:45.790596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:29.142 [2024-07-15 10:05:45.791001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.142 [2024-07-15 10:05:45.791029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.142 [2024-07-15 10:05:45.800761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:29.142 [2024-07-15 10:05:45.801052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.142 [2024-07-15 10:05:45.801080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.142 [2024-07-15 10:05:45.811123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:29.142 [2024-07-15 10:05:45.811449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.142 [2024-07-15 10:05:45.811476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.142 [2024-07-15 10:05:45.820325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:29.142 [2024-07-15 10:05:45.820684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.143 [2024-07-15 10:05:45.820712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.143 [2024-07-15 10:05:45.829988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:29.143 [2024-07-15 10:05:45.830301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.143 [2024-07-15 10:05:45.830329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.143 [2024-07-15 10:05:45.839218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:29.143 [2024-07-15 10:05:45.839590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.143 [2024-07-15 10:05:45.839633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.143
[... roughly sixty further identical cycles (10:05:45.848 through 10:05:46.447), each a tcp.c:2067:data_crc32_calc_done *ERROR* followed by the WRITE command and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, differing only in timestamp, lba, and sqhd; elided ...]
00:32:29.923 [2024-07-15 10:05:46.457374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154ad30) with pdu=0x2000190fef90 00:32:29.923 [2024-07-15 10:05:46.457691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.923 [2024-07-15 10:05:46.457720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.923 00:32:29.923 Latency(us) 00:32:29.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.923 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:29.923 nvme0n1 : 2.00 2996.43 374.55 0.00 0.00 5327.79 3810.80 16505.36 00:32:29.923 =================================================================================================================== 00:32:29.923 Total : 2996.43 374.55 0.00 0.00 5327.79 3810.80 16505.36 00:32:29.923 0 00:32:29.923 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:29.923 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:29.923 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:29.923 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:29.923 | .driver_specific 00:32:29.923 | .nvme_error 00:32:29.923 | .status_code 00:32:29.923 | .command_transient_transport_error' 00:32:30.181 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 193 > 0 )) 00:32:30.181 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2049693 00:32:30.181 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2049693 ']' 00:32:30.181 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2049693 00:32:30.181 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:32:30.181 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:30.181 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2049693 00:32:30.181 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:30.181 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:30.181 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2049693' 00:32:30.181 killing process with pid 2049693 00:32:30.181 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2049693 00:32:30.181 Received shutdown signal, test time was about 2.000000 seconds 00:32:30.181 00:32:30.181 Latency(us) 00:32:30.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.181 =================================================================================================================== 00:32:30.181 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:30.181 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2049693 00:32:30.440 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2048335 00:32:30.440 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2048335 ']' 00:32:30.440 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2048335 00:32:30.440 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:32:30.440 10:05:46 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:30.440 10:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2048335 00:32:30.440 10:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:30.440 10:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:30.440 10:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2048335' 00:32:30.440 killing process with pid 2048335 00:32:30.440 10:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2048335 00:32:30.440 10:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2048335 00:32:30.440 00:32:30.440 real 0m15.079s 00:32:30.440 user 0m30.104s 00:32:30.440 sys 0m4.076s 00:32:30.440 10:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:30.440 10:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:30.440 ************************************ 00:32:30.440 END TEST nvmf_digest_error 00:32:30.440 ************************************ 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:30.700 rmmod nvme_tcp 00:32:30.700 rmmod nvme_fabrics 00:32:30.700 rmmod nvme_keyring 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2048335 ']' 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2048335 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2048335 ']' 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2048335 00:32:30.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2048335) - No such process 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2048335 is not found' 00:32:30.700 Process with pid 2048335 is not found 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:30.700 
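For reference while reading the digest trace above: the (( 193 > 0 )) assertion comes from get_transient_errcount, which queries the still-running bdevperf instance over its RPC socket and extracts the transient-transport-error counter from the per-bdev I/O statistics. A minimal stand-alone reconstruction of that helper, assembled from the rpc.py and jq invocations visible in the trace (the function body is a sketch, not the literal host/digest.sh source):

    get_transient_errcount() {
        local bdev=$1
        # bdevperf listens on /var/tmp/bperf.sock in this run; bdev_get_iostat
        # reports the NVMe error counters under .driver_specific.nvme_error.
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

With digest-error injection active, every write completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), so the counter (193 in this run) is expected to be non-zero.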
10:05:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:30.700 10:05:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.607 10:05:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:32.607 00:32:32.607 real 0m34.394s 00:32:32.607 user 1m0.602s 00:32:32.607 sys 0m9.598s 00:32:32.607 10:05:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:32.607 10:05:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:32.607 ************************************ 00:32:32.607 END TEST nvmf_digest 00:32:32.607 ************************************ 00:32:32.607 10:05:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:32.607 10:05:49 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:32:32.607 10:05:49 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:32:32.607 10:05:49 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:32:32.607 10:05:49 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:32.607 10:05:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:32.607 10:05:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:32.607 10:05:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.607 ************************************ 00:32:32.607 START TEST nvmf_bdevperf 00:32:32.607 ************************************ 00:32:32.607 10:05:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:32.865 * Looking for test storage... 
00:32:32.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:[... same toolchain PATH as paths/export.sh@2 above, re-prepended; elided ...] 10:05:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same PATH, elided ...] 10:05:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 10:05:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... exported PATH, identical to the above; elided ...] 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:32.865 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.866 10:05:49 nvmf_tcp.nvmf_bdevperf --
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:32.866 10:05:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.866 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:32.866 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:32.866 10:05:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:32:32.866 10:05:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:34.770 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:34.770 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:32:34.770 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:34.770 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:34.770 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:34.770 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:34.770 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:34.770 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:32:34.770 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:34.770 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:32:34.770 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:32:34.770 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:32:34.770 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:32:34.770 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:32:34.770 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:32:34.770 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:34.770 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:34.770 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:34.770 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:34.771 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:34.771 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:34.771 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:34.771 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:34.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:34.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:32:34.771 00:32:34.771 --- 10.0.0.2 ping statistics --- 00:32:34.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.771 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:34.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:34.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:32:34.771 00:32:34.771 --- 10.0.0.1 ping statistics --- 00:32:34.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.771 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2052037 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2052037 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2052037 ']' 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:34.771 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:35.030 [2024-07-15 10:05:51.574116] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:32:35.030 [2024-07-15 10:05:51.574187] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:35.030 EAL: No free 2048 kB hugepages reported on node 1 00:32:35.030 [2024-07-15 10:05:51.611527] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
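An aside on the nvmf_tcp_init plumbing traced just before the target start above: stripped of the xtrace noise, it reduces to a short sequence of ip/iptables commands that moves one port of the NIC into a private network namespace and then pings across the link in both directions (a condensed restatement of the commands in the trace; the interface names cvl_0_0/cvl_0_1 and addresses are what this runner detected, not fixed values):

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Both pings succeeding (as above, ~0.25 ms RTT each way) is the precondition for the NVMe/TCP test that follows; nvmf_tgt itself is then launched with ip netns exec inside cvl_0_0_ns_spdk.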
00:32:35.030 [2024-07-15 10:05:51.637571] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:32:35.030 [2024-07-15 10:05:51.721991] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:35.030 [2024-07-15 10:05:51.722041] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:35.030 [2024-07-15 10:05:51.722069] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:35.030 [2024-07-15 10:05:51.722080] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:35.030 [2024-07-15 10:05:51.722089] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:35.030 [2024-07-15 10:05:51.722170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:32:35.030 [2024-07-15 10:05:51.722236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:32:35.030 [2024-07-15 10:05:51.722242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:35.288 [2024-07-15 10:05:51.860061] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:35.288 Malloc0
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:35.288 [2024-07-15 10:05:51.924807] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:32:35.288 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:32:35.288 {
00:32:35.288   "params": {
00:32:35.288     "name": "Nvme$subsystem",
00:32:35.288     "trtype": "$TEST_TRANSPORT",
00:32:35.288     "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:35.288     "adrfam": "ipv4",
00:32:35.288     "trsvcid": "$NVMF_PORT",
00:32:35.289     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:35.289     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:35.289     "hdgst": ${hdgst:-false},
00:32:35.289     "ddgst": ${ddgst:-false}
00:32:35.289   },
00:32:35.289   "method": "bdev_nvme_attach_controller"
00:32:35.289 }
00:32:35.289 EOF
00:32:35.289 )")
00:32:35.289 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:32:35.289 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:32:35.289 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:32:35.289 10:05:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:32:35.289   "params": {
00:32:35.289     "name": "Nvme1",
00:32:35.289     "trtype": "tcp",
00:32:35.289     "traddr": "10.0.0.2",
00:32:35.289     "adrfam": "ipv4",
00:32:35.289     "trsvcid": "4420",
00:32:35.289     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:32:35.289     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:32:35.289     "hdgst": false,
00:32:35.289     "ddgst": false
00:32:35.289   },
00:32:35.289   "method": "bdev_nvme_attach_controller"
00:32:35.289 }'
00:32:35.289 [2024-07-15 10:05:51.974555] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:32:35.289 [2024-07-15 10:05:51.974621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052068 ]
00:32:35.289 EAL: No free 2048 kB hugepages reported on node 1
00:32:35.289 [2024-07-15 10:05:52.006150] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:35.289 [2024-07-15 10:05:52.034921] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:35.547 [2024-07-15 10:05:52.126400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:35.805 Running I/O for 1 seconds...
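The five rpc_cmd calls above are the entire target configuration for this test: a TCP transport, a 64 MiB malloc-backed bdev, one subsystem with one namespace, and one listener. rpc_cmd is the suite's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the same setup can be reproduced directly; the arguments below are copied from the trace, only the $RPC shorthand is invented here:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8192-byte IO unit
    $RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420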
00:32:36.771
00:32:36.771                                                      Latency(us)
00:32:36.771 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:36.771 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:36.771 Verification LBA range: start 0x0 length 0x4000
00:32:36.771 Nvme1n1            :       1.01    8586.65      33.54       0.00     0.00   14848.05    1978.22   15534.46
00:32:36.771 ===================================================================================================================
00:32:36.771 Total              :    8586.65      33.54       0.00     0.00   14848.05    1978.22   15534.46
00:32:37.051 10:05:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2052327
00:32:37.051 10:05:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:32:37.051 10:05:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:32:37.051 10:05:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:32:37.051 10:05:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:32:37.051 10:05:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:32:37.051 10:05:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:32:37.051 10:05:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:32:37.051 {
00:32:37.051   "params": {
00:32:37.051     "name": "Nvme$subsystem",
00:32:37.051     "trtype": "$TEST_TRANSPORT",
00:32:37.051     "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:37.051     "adrfam": "ipv4",
00:32:37.051     "trsvcid": "$NVMF_PORT",
00:32:37.051     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:37.051     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:37.051     "hdgst": ${hdgst:-false},
00:32:37.051     "ddgst": ${ddgst:-false}
00:32:37.051   },
00:32:37.051   "method": "bdev_nvme_attach_controller"
00:32:37.051 }
00:32:37.051 EOF
00:32:37.051 )")
00:32:37.051 10:05:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:32:37.051 10:05:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:32:37.051 10:05:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:32:37.051 10:05:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:32:37.051   "params": {
00:32:37.051     "name": "Nvme1",
00:32:37.051     "trtype": "tcp",
00:32:37.051     "traddr": "10.0.0.2",
00:32:37.051     "adrfam": "ipv4",
00:32:37.051     "trsvcid": "4420",
00:32:37.051     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:32:37.051     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:32:37.051     "hdgst": false,
00:32:37.051     "ddgst": false
00:32:37.051   },
00:32:37.051   "method": "bdev_nvme_attach_controller"
00:32:37.051 }'
00:32:37.051 [2024-07-15 10:05:53.728816] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:32:37.051 [2024-07-15 10:05:53.728919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052327 ]
00:32:37.051 EAL: No free 2048 kB hugepages reported on node 1
00:32:37.051 [2024-07-15 10:05:53.761107] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
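Two sanity checks on the one-second run above: 8586.65 IOPS at the 4096-byte I/O size is 8586.65 x 4096 B, roughly 35.2 MB/s, which is the 33.54 in the MiB/s column; and queue depth over throughput (128 / 8586.65 per second) is about 14.9 ms, consistent with the ~14.85 ms average latency. The second bdevperf instance reads its config via /dev/fd/63, which is plain bash process substitution; written out by hand (with $SPDK as in the earlier sketch, and gen_nvmf_target_json being the harness function traced above) the invocation is:

    # bdevperf treats the pipe from process substitution as a regular config file
    "$SPDK/build/examples/bdevperf" --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f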
00:32:37.051 [2024-07-15 10:05:53.789661] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:37.309 [2024-07-15 10:05:53.874276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:37.569 Running I/O for 15 seconds...
00:32:40.106 10:05:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2052037
00:32:40.106 10:05:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:32:40.106 [2024-07-15 10:05:56.701857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:40.106 [2024-07-15 10:05:56.701947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:40.106 [2024-07-15 10:05:56.701979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:40.106 [2024-07-15 10:05:56.702005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:40.106 [... 10:05:56.702023 through 10:05:56.706169: the same command/completion pair repeats for every other outstanding I/O on qid:1, WRITE lba:41288 through lba:42024 and READ lba:41016 through lba:41256 (len:8 each, cids varying), all completed ABORTED - SQ DELETION (00/08) ...]
00:32:40.109 [2024-07-15 10:05:56.706187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:40.109 [2024-07-15 10:05:56.706203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:40.109 [2024-07-15 10:05:56.706219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd0d60 is same with the state(5) to be set
00:32:40.109 [2024-07-15 10:05:56.706237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:40.109 [2024-07-15 10:05:56.706250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:40.109 [2024-07-15 10:05:56.706263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41264 len:8 PRP1 0x0 PRP2 0x0
00:32:40.109 [2024-07-15 10:05:56.706277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:40.109 [2024-07-15 10:05:56.706342] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bd0d60 was disconnected and freed. reset controller.
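None of the above is a target-side error report: each outstanding command is simply printed twice (the command, then its ABORTED - SQ DELETION completion) while bdev_nvme tears down the dead qpair, which is what turns one kill -9 into this wall of text. Counting them gives 96 writes (lba 41272-42032) and 32 reads (lba 41016-41264), 128 in all, exactly the -q 128 queue depth. A one-liner to tally such a storm from a saved log (the log file name is illustrative):

    grep -o '\*NOTICE\*: \(READ\|WRITE\) sqid:1' bdevperf.log | awk '{print $2}' | sort | uniq -c
    #  32 READ
    #  96 WRITE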
00:32:40.109 [2024-07-15 10:05:56.706418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:40.109 [2024-07-15 10:05:56.706441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:40.109 [2024-07-15 10:05:56.706458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:40.109 [2024-07-15 10:05:56.706486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:40.109 [2024-07-15 10:05:56.706500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:40.109 [2024-07-15 10:05:56.706512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:40.109 [2024-07-15 10:05:56.706525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:40.109 [2024-07-15 10:05:56.706552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:40.109 [2024-07-15 10:05:56.706569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.109 [2024-07-15 10:05:56.710370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.109 [2024-07-15 10:05:56.710412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.109 [2024-07-15 10:05:56.711121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.109 [2024-07-15 10:05:56.711150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.109 [2024-07-15 10:05:56.711166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.109 [2024-07-15 10:05:56.711417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.109 [2024-07-15 10:05:56.711661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.109 [2024-07-15 10:05:56.711685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.109 [2024-07-15 10:05:56.711704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.109 [2024-07-15 10:05:56.715284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
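errno = 111 here is ECONNREFUSED: the kill -9 left nothing listening on 10.0.0.2:4420, so every reconnect attempt is refused at the socket level and the controller lands back in the failed state. The mapping is easy to confirm:

    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # ECONNREFUSED Connection refused

Judging by the timestamps that follow, bdev_nvme retries the reset roughly every 14 ms until a connection is accepted again.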
00:32:40.109 [2024-07-15 10:05:56.724554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:40.109 [2024-07-15 10:05:56.724994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.109 [2024-07-15 10:05:56.725025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:40.109 [2024-07-15 10:05:56.725043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:40.109 [2024-07-15 10:05:56.725281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:40.109 [2024-07-15 10:05:56.725522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:40.109 [2024-07-15 10:05:56.725545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:40.109 [2024-07-15 10:05:56.725560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:40.109 [2024-07-15 10:05:56.729134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:40.109 [2024-07-15 10:05:56.738413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:40.109 [2024-07-15 10:05:56.738857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.109 [2024-07-15 10:05:56.738896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:40.109 [2024-07-15 10:05:56.738915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:40.109 [2024-07-15 10:05:56.739153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:40.109 [2024-07-15 10:05:56.739394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:40.109 [2024-07-15 10:05:56.739417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:40.109 [2024-07-15 10:05:56.739432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:40.109 [2024-07-15 10:05:56.743008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:40.109 [2024-07-15 10:05:56.752281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.109 [2024-07-15 10:05:56.752716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.109 [2024-07-15 10:05:56.752743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.109 [2024-07-15 10:05:56.752758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.109 [2024-07-15 10:05:56.753022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.109 [2024-07-15 10:05:56.753266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.109 [2024-07-15 10:05:56.753289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.109 [2024-07-15 10:05:56.753303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.109 [2024-07-15 10:05:56.756866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.109 [2024-07-15 10:05:56.766142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.109 [2024-07-15 10:05:56.766549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.109 [2024-07-15 10:05:56.766580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.109 [2024-07-15 10:05:56.766598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.109 [2024-07-15 10:05:56.766835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.109 [2024-07-15 10:05:56.767088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.109 [2024-07-15 10:05:56.767112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.109 [2024-07-15 10:05:56.767127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.109 [2024-07-15 10:05:56.770695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.109 [2024-07-15 10:05:56.779970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.109 [2024-07-15 10:05:56.780375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.109 [2024-07-15 10:05:56.780406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.109 [2024-07-15 10:05:56.780424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.109 [2024-07-15 10:05:56.780661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.109 [2024-07-15 10:05:56.780913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.109 [2024-07-15 10:05:56.780937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.109 [2024-07-15 10:05:56.780952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.109 [2024-07-15 10:05:56.784514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.109 [2024-07-15 10:05:56.793993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.109 [2024-07-15 10:05:56.794439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.109 [2024-07-15 10:05:56.794470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.109 [2024-07-15 10:05:56.794487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.109 [2024-07-15 10:05:56.794731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.109 [2024-07-15 10:05:56.794985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.109 [2024-07-15 10:05:56.795009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.109 [2024-07-15 10:05:56.795024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.109 [2024-07-15 10:05:56.798588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.109 [2024-07-15 10:05:56.807868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.109 [2024-07-15 10:05:56.808311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.109 [2024-07-15 10:05:56.808342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.109 [2024-07-15 10:05:56.808359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.109 [2024-07-15 10:05:56.808596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.109 [2024-07-15 10:05:56.808838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.109 [2024-07-15 10:05:56.808861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.109 [2024-07-15 10:05:56.808887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.109 [2024-07-15 10:05:56.812456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.109 [2024-07-15 10:05:56.821725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.110 [2024-07-15 10:05:56.822167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.110 [2024-07-15 10:05:56.822198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.110 [2024-07-15 10:05:56.822215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.110 [2024-07-15 10:05:56.822452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.110 [2024-07-15 10:05:56.822693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.110 [2024-07-15 10:05:56.822716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.110 [2024-07-15 10:05:56.822731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.110 [2024-07-15 10:05:56.826305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.110 [2024-07-15 10:05:56.835573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.110 [2024-07-15 10:05:56.835953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.110 [2024-07-15 10:05:56.835984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.110 [2024-07-15 10:05:56.836001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.110 [2024-07-15 10:05:56.836239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.110 [2024-07-15 10:05:56.836480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.110 [2024-07-15 10:05:56.836503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.110 [2024-07-15 10:05:56.836523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.110 [2024-07-15 10:05:56.840109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.110 [2024-07-15 10:05:56.849587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.110 [2024-07-15 10:05:56.850006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.110 [2024-07-15 10:05:56.850037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.110 [2024-07-15 10:05:56.850054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.110 [2024-07-15 10:05:56.850292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.110 [2024-07-15 10:05:56.850533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.110 [2024-07-15 10:05:56.850556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.110 [2024-07-15 10:05:56.850572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.110 [2024-07-15 10:05:56.854147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.110 [2024-07-15 10:05:56.863415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.110 [2024-07-15 10:05:56.863821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.110 [2024-07-15 10:05:56.863851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.110 [2024-07-15 10:05:56.863869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.110 [2024-07-15 10:05:56.864117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.110 [2024-07-15 10:05:56.864358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.110 [2024-07-15 10:05:56.864381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.110 [2024-07-15 10:05:56.864396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.110 [2024-07-15 10:05:56.867966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.110 [2024-07-15 10:05:56.877444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.110 [2024-07-15 10:05:56.877887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.110 [2024-07-15 10:05:56.877929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.110 [2024-07-15 10:05:56.877945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.110 [2024-07-15 10:05:56.878202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.110 [2024-07-15 10:05:56.878444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.110 [2024-07-15 10:05:56.878467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.110 [2024-07-15 10:05:56.878482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.110 [2024-07-15 10:05:56.882092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.371 [2024-07-15 10:05:56.891382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.371 [2024-07-15 10:05:56.891924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.371 [2024-07-15 10:05:56.891956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.371 [2024-07-15 10:05:56.891973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.371 [2024-07-15 10:05:56.892211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.371 [2024-07-15 10:05:56.892453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.371 [2024-07-15 10:05:56.892477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.371 [2024-07-15 10:05:56.892492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.371 [2024-07-15 10:05:56.896069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.371 [2024-07-15 10:05:56.905358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.371 [2024-07-15 10:05:56.905791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.371 [2024-07-15 10:05:56.905822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.371 [2024-07-15 10:05:56.905839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.371 [2024-07-15 10:05:56.906086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.371 [2024-07-15 10:05:56.906327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.371 [2024-07-15 10:05:56.906350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.371 [2024-07-15 10:05:56.906366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.371 [2024-07-15 10:05:56.909941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.371 [2024-07-15 10:05:56.919282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.371 [2024-07-15 10:05:56.919705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.371 [2024-07-15 10:05:56.919736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.371 [2024-07-15 10:05:56.919753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.371 [2024-07-15 10:05:56.920001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.371 [2024-07-15 10:05:56.920243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.371 [2024-07-15 10:05:56.920267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.371 [2024-07-15 10:05:56.920282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.371 [2024-07-15 10:05:56.923857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.371 [2024-07-15 10:05:56.933157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.371 [2024-07-15 10:05:56.933579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.371 [2024-07-15 10:05:56.933610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.371 [2024-07-15 10:05:56.933627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.371 [2024-07-15 10:05:56.933865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.371 [2024-07-15 10:05:56.934124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.371 [2024-07-15 10:05:56.934148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.371 [2024-07-15 10:05:56.934163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.371 [2024-07-15 10:05:56.937732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.372 [2024-07-15 10:05:56.947022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.372 [2024-07-15 10:05:56.947451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.372 [2024-07-15 10:05:56.947482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.372 [2024-07-15 10:05:56.947500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.372 [2024-07-15 10:05:56.947737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.372 [2024-07-15 10:05:56.947992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.372 [2024-07-15 10:05:56.948016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.372 [2024-07-15 10:05:56.948031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.372 [2024-07-15 10:05:56.951599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.372 [2024-07-15 10:05:56.960896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.372 [2024-07-15 10:05:56.961301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.372 [2024-07-15 10:05:56.961332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.372 [2024-07-15 10:05:56.961349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.372 [2024-07-15 10:05:56.961587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.372 [2024-07-15 10:05:56.961828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.372 [2024-07-15 10:05:56.961852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.372 [2024-07-15 10:05:56.961867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.372 [2024-07-15 10:05:56.965395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.372 [2024-07-15 10:05:56.974948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.372 [2024-07-15 10:05:56.975397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.372 [2024-07-15 10:05:56.975428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.372 [2024-07-15 10:05:56.975445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.372 [2024-07-15 10:05:56.975682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.372 [2024-07-15 10:05:56.975937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.372 [2024-07-15 10:05:56.975961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.372 [2024-07-15 10:05:56.975976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.372 [2024-07-15 10:05:56.979553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.372 [2024-07-15 10:05:56.988851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.372 [2024-07-15 10:05:56.989311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.372 [2024-07-15 10:05:56.989339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.372 [2024-07-15 10:05:56.989370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.372 [2024-07-15 10:05:56.989626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.372 [2024-07-15 10:05:56.989867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.372 [2024-07-15 10:05:56.989902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.372 [2024-07-15 10:05:56.989918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.372 [2024-07-15 10:05:56.993488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.372 [2024-07-15 10:05:57.002775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.372 [2024-07-15 10:05:57.003244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.372 [2024-07-15 10:05:57.003276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.372 [2024-07-15 10:05:57.003293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.372 [2024-07-15 10:05:57.003530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.372 [2024-07-15 10:05:57.003772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.372 [2024-07-15 10:05:57.003795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.372 [2024-07-15 10:05:57.003810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.372 [2024-07-15 10:05:57.007388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.372 [2024-07-15 10:05:57.016660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.372 [2024-07-15 10:05:57.017072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.372 [2024-07-15 10:05:57.017099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.372 [2024-07-15 10:05:57.017130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.372 [2024-07-15 10:05:57.017371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.372 [2024-07-15 10:05:57.017575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.372 [2024-07-15 10:05:57.017595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.372 [2024-07-15 10:05:57.017608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.372 [2024-07-15 10:05:57.021156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.372 [2024-07-15 10:05:57.030650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.372 [2024-07-15 10:05:57.031052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.372 [2024-07-15 10:05:57.031081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.372 [2024-07-15 10:05:57.031102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.372 [2024-07-15 10:05:57.031351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.372 [2024-07-15 10:05:57.031592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.372 [2024-07-15 10:05:57.031616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.372 [2024-07-15 10:05:57.031631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.372 [2024-07-15 10:05:57.035261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.372 [2024-07-15 10:05:57.044640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.372 [2024-07-15 10:05:57.045072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.372 [2024-07-15 10:05:57.045100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.372 [2024-07-15 10:05:57.045115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.372 [2024-07-15 10:05:57.045363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.372 [2024-07-15 10:05:57.045605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.372 [2024-07-15 10:05:57.045628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.372 [2024-07-15 10:05:57.045643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.372 [2024-07-15 10:05:57.049258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.372 [2024-07-15 10:05:57.058557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.372 [2024-07-15 10:05:57.058967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.372 [2024-07-15 10:05:57.058996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.372 [2024-07-15 10:05:57.059012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.372 [2024-07-15 10:05:57.059253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.373 [2024-07-15 10:05:57.059495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.373 [2024-07-15 10:05:57.059518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.373 [2024-07-15 10:05:57.059533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.373 [2024-07-15 10:05:57.063099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.373 [2024-07-15 10:05:57.072559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.373 [2024-07-15 10:05:57.073000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.373 [2024-07-15 10:05:57.073033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.373 [2024-07-15 10:05:57.073050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.373 [2024-07-15 10:05:57.073288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.373 [2024-07-15 10:05:57.073529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.373 [2024-07-15 10:05:57.073558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.373 [2024-07-15 10:05:57.073574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.373 [2024-07-15 10:05:57.077133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.373 [2024-07-15 10:05:57.086537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.373 [2024-07-15 10:05:57.086934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.373 [2024-07-15 10:05:57.086966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.373 [2024-07-15 10:05:57.086983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.373 [2024-07-15 10:05:57.087220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.373 [2024-07-15 10:05:57.087461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.373 [2024-07-15 10:05:57.087485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.373 [2024-07-15 10:05:57.087500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.373 [2024-07-15 10:05:57.091071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.373 [2024-07-15 10:05:57.100545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.373 [2024-07-15 10:05:57.100959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.373 [2024-07-15 10:05:57.100991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.373 [2024-07-15 10:05:57.101009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.373 [2024-07-15 10:05:57.101246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.373 [2024-07-15 10:05:57.101488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.373 [2024-07-15 10:05:57.101511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.373 [2024-07-15 10:05:57.101526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.373 [2024-07-15 10:05:57.105127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.373 [2024-07-15 10:05:57.114416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.373 [2024-07-15 10:05:57.114856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.373 [2024-07-15 10:05:57.114895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.373 [2024-07-15 10:05:57.114914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.373 [2024-07-15 10:05:57.115152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.373 [2024-07-15 10:05:57.115393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.373 [2024-07-15 10:05:57.115416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.373 [2024-07-15 10:05:57.115430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.373 [2024-07-15 10:05:57.118990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.373 [2024-07-15 10:05:57.128439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.373 [2024-07-15 10:05:57.128819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.373 [2024-07-15 10:05:57.128850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.373 [2024-07-15 10:05:57.128868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.373 [2024-07-15 10:05:57.129143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.373 [2024-07-15 10:05:57.129394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.373 [2024-07-15 10:05:57.129417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.373 [2024-07-15 10:05:57.129433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.373 [2024-07-15 10:05:57.132993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.373 [2024-07-15 10:05:57.142457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.373 [2024-07-15 10:05:57.142891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.373 [2024-07-15 10:05:57.142938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.373 [2024-07-15 10:05:57.142954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.373 [2024-07-15 10:05:57.143195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.373 [2024-07-15 10:05:57.143456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.373 [2024-07-15 10:05:57.143480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.373 [2024-07-15 10:05:57.143495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.373 [2024-07-15 10:05:57.147030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.635 [2024-07-15 10:05:57.156056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.635 [2024-07-15 10:05:57.156440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.635 [2024-07-15 10:05:57.156466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.635 [2024-07-15 10:05:57.156481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.635 [2024-07-15 10:05:57.156722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.635 [2024-07-15 10:05:57.156957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.635 [2024-07-15 10:05:57.156977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.635 [2024-07-15 10:05:57.156990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.635 [2024-07-15 10:05:57.160128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.635 [2024-07-15 10:05:57.169371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.635 [2024-07-15 10:05:57.169793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.635 [2024-07-15 10:05:57.169820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.635 [2024-07-15 10:05:57.169856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.635 [2024-07-15 10:05:57.170104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.635 [2024-07-15 10:05:57.170319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.635 [2024-07-15 10:05:57.170339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.635 [2024-07-15 10:05:57.170351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.635 [2024-07-15 10:05:57.173319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.635 [2024-07-15 10:05:57.182600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.635 [2024-07-15 10:05:57.183008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.635 [2024-07-15 10:05:57.183037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.635 [2024-07-15 10:05:57.183053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.635 [2024-07-15 10:05:57.183291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.635 [2024-07-15 10:05:57.183488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.635 [2024-07-15 10:05:57.183507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.635 [2024-07-15 10:05:57.183519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.635 [2024-07-15 10:05:57.186500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.635 [2024-07-15 10:05:57.195786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.635 [2024-07-15 10:05:57.196199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.635 [2024-07-15 10:05:57.196227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.635 [2024-07-15 10:05:57.196243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.635 [2024-07-15 10:05:57.196484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.635 [2024-07-15 10:05:57.196698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.635 [2024-07-15 10:05:57.196717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.635 [2024-07-15 10:05:57.196729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.635 [2024-07-15 10:05:57.199651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.635 [2024-07-15 10:05:57.209131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.635 [2024-07-15 10:05:57.209552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.635 [2024-07-15 10:05:57.209594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.635 [2024-07-15 10:05:57.209610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.635 [2024-07-15 10:05:57.209884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.635 [2024-07-15 10:05:57.210108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.635 [2024-07-15 10:05:57.210134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.635 [2024-07-15 10:05:57.210148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.635 [2024-07-15 10:05:57.213121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.635 [2024-07-15 10:05:57.222317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.635 [2024-07-15 10:05:57.222722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.635 [2024-07-15 10:05:57.222749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.635 [2024-07-15 10:05:57.222765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.635 [2024-07-15 10:05:57.223007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.635 [2024-07-15 10:05:57.223206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.635 [2024-07-15 10:05:57.223225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.635 [2024-07-15 10:05:57.223237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.635 [2024-07-15 10:05:57.226215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.635 [2024-07-15 10:05:57.235525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.635 [2024-07-15 10:05:57.235932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.636 [2024-07-15 10:05:57.235961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.636 [2024-07-15 10:05:57.235977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.636 [2024-07-15 10:05:57.236219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.636 [2024-07-15 10:05:57.236431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.636 [2024-07-15 10:05:57.236450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.636 [2024-07-15 10:05:57.236462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.636 [2024-07-15 10:05:57.239477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.636 [2024-07-15 10:05:57.248757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.636 [2024-07-15 10:05:57.249141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.636 [2024-07-15 10:05:57.249169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.636 [2024-07-15 10:05:57.249185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.636 [2024-07-15 10:05:57.249428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.636 [2024-07-15 10:05:57.249643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.636 [2024-07-15 10:05:57.249662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.636 [2024-07-15 10:05:57.249674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.636 [2024-07-15 10:05:57.252606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.636 [2024-07-15 10:05:57.262040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.636 [2024-07-15 10:05:57.262508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.636 [2024-07-15 10:05:57.262536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.636 [2024-07-15 10:05:57.262551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.636 [2024-07-15 10:05:57.262792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.636 [2024-07-15 10:05:57.263058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.636 [2024-07-15 10:05:57.263080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.636 [2024-07-15 10:05:57.263094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.636 [2024-07-15 10:05:57.266045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.636 [2024-07-15 10:05:57.275348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.636 [2024-07-15 10:05:57.275818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.636 [2024-07-15 10:05:57.275846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.636 [2024-07-15 10:05:57.275861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.636 [2024-07-15 10:05:57.276110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.636 [2024-07-15 10:05:57.276328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.636 [2024-07-15 10:05:57.276348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.636 [2024-07-15 10:05:57.276360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.636 [2024-07-15 10:05:57.279332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.636 [2024-07-15 10:05:57.288625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.636 [2024-07-15 10:05:57.289006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.636 [2024-07-15 10:05:57.289035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.636 [2024-07-15 10:05:57.289051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.636 [2024-07-15 10:05:57.289292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.636 [2024-07-15 10:05:57.289490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.636 [2024-07-15 10:05:57.289509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.636 [2024-07-15 10:05:57.289521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.636 [2024-07-15 10:05:57.292503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.636 [2024-07-15 10:05:57.301972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.636 [2024-07-15 10:05:57.302381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.636 [2024-07-15 10:05:57.302422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.636 [2024-07-15 10:05:57.302438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.636 [2024-07-15 10:05:57.302686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.636 [2024-07-15 10:05:57.302907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.636 [2024-07-15 10:05:57.302927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.636 [2024-07-15 10:05:57.302939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.636 [2024-07-15 10:05:57.305909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.636 [2024-07-15 10:05:57.315241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.636 [2024-07-15 10:05:57.315649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.636 [2024-07-15 10:05:57.315677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.636 [2024-07-15 10:05:57.315692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.636 [2024-07-15 10:05:57.315943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.636 [2024-07-15 10:05:57.316147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.636 [2024-07-15 10:05:57.316167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.636 [2024-07-15 10:05:57.316179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.636 [2024-07-15 10:05:57.319203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.636 [2024-07-15 10:05:57.328519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.636 [2024-07-15 10:05:57.329000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.636 [2024-07-15 10:05:57.329029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.636 [2024-07-15 10:05:57.329044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.636 [2024-07-15 10:05:57.329284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.636 [2024-07-15 10:05:57.329497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.636 [2024-07-15 10:05:57.329516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.636 [2024-07-15 10:05:57.329529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.636 [2024-07-15 10:05:57.332509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.636 [2024-07-15 10:05:57.341768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.636 [2024-07-15 10:05:57.342185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.636 [2024-07-15 10:05:57.342214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.636 [2024-07-15 10:05:57.342230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.636 [2024-07-15 10:05:57.342473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.636 [2024-07-15 10:05:57.342686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.636 [2024-07-15 10:05:57.342705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.636 [2024-07-15 10:05:57.342722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.636 [2024-07-15 10:05:57.345700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.636 [2024-07-15 10:05:57.355001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.636 [2024-07-15 10:05:57.355395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.636 [2024-07-15 10:05:57.355422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.636 [2024-07-15 10:05:57.355452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.636 [2024-07-15 10:05:57.355688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.636 [2024-07-15 10:05:57.355910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.636 [2024-07-15 10:05:57.355930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.636 [2024-07-15 10:05:57.355943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.636 [2024-07-15 10:05:57.358912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.636 [2024-07-15 10:05:57.368353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.636 [2024-07-15 10:05:57.368784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.636 [2024-07-15 10:05:57.368826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.636 [2024-07-15 10:05:57.368841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.636 [2024-07-15 10:05:57.369078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.636 [2024-07-15 10:05:57.369313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.636 [2024-07-15 10:05:57.369332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.636 [2024-07-15 10:05:57.369344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.636 [2024-07-15 10:05:57.372232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.637 [2024-07-15 10:05:57.381545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.637 [2024-07-15 10:05:57.382013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.637 [2024-07-15 10:05:57.382041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.637 [2024-07-15 10:05:57.382057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.637 [2024-07-15 10:05:57.382310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.637 [2024-07-15 10:05:57.382509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.637 [2024-07-15 10:05:57.382528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.637 [2024-07-15 10:05:57.382540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.637 [2024-07-15 10:05:57.385515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.637 [2024-07-15 10:05:57.394760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.637 [2024-07-15 10:05:57.395175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.637 [2024-07-15 10:05:57.395207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.637 [2024-07-15 10:05:57.395239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.637 [2024-07-15 10:05:57.395505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.637 [2024-07-15 10:05:57.395703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.637 [2024-07-15 10:05:57.395722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.637 [2024-07-15 10:05:57.395734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.637 [2024-07-15 10:05:57.398751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.637 [2024-07-15 10:05:57.408023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.637 [2024-07-15 10:05:57.408449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.637 [2024-07-15 10:05:57.408491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.637 [2024-07-15 10:05:57.408507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.637 [2024-07-15 10:05:57.408764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.637 [2024-07-15 10:05:57.409008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.637 [2024-07-15 10:05:57.409028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.637 [2024-07-15 10:05:57.409041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.637 [2024-07-15 10:05:57.412010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.899 [2024-07-15 10:05:57.421389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.899 [2024-07-15 10:05:57.421812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.899 [2024-07-15 10:05:57.421839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.899 [2024-07-15 10:05:57.421870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.899 [2024-07-15 10:05:57.422121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.899 [2024-07-15 10:05:57.422355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.899 [2024-07-15 10:05:57.422375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.899 [2024-07-15 10:05:57.422388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.899 [2024-07-15 10:05:57.425593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.899 [2024-07-15 10:05:57.434682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.899 [2024-07-15 10:05:57.435160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.899 [2024-07-15 10:05:57.435189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.899 [2024-07-15 10:05:57.435204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.899 [2024-07-15 10:05:57.435443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.899 [2024-07-15 10:05:57.435661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.899 [2024-07-15 10:05:57.435681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.899 [2024-07-15 10:05:57.435693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.899 [2024-07-15 10:05:57.438715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.899 [2024-07-15 10:05:57.447977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.899 [2024-07-15 10:05:57.448385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.899 [2024-07-15 10:05:57.448412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.899 [2024-07-15 10:05:57.448442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.899 [2024-07-15 10:05:57.448683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.899 [2024-07-15 10:05:57.448926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.899 [2024-07-15 10:05:57.448947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.899 [2024-07-15 10:05:57.448960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.899 [2024-07-15 10:05:57.451827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.899 [2024-07-15 10:05:57.461193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.899 [2024-07-15 10:05:57.461599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.899 [2024-07-15 10:05:57.461632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.899 [2024-07-15 10:05:57.461647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.899 [2024-07-15 10:05:57.461912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.899 [2024-07-15 10:05:57.462123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.899 [2024-07-15 10:05:57.462145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.899 [2024-07-15 10:05:57.462157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.899 [2024-07-15 10:05:57.465090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.899 [2024-07-15 10:05:57.474499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.899 [2024-07-15 10:05:57.474934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.899 [2024-07-15 10:05:57.474963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.899 [2024-07-15 10:05:57.474979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.899 [2024-07-15 10:05:57.475208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.899 [2024-07-15 10:05:57.475424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.899 [2024-07-15 10:05:57.475443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.899 [2024-07-15 10:05:57.475455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.899 [2024-07-15 10:05:57.478432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.899 [2024-07-15 10:05:57.487752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.899 [2024-07-15 10:05:57.488139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.899 [2024-07-15 10:05:57.488168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.899 [2024-07-15 10:05:57.488184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.899 [2024-07-15 10:05:57.488424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.899 [2024-07-15 10:05:57.488622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.899 [2024-07-15 10:05:57.488641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.899 [2024-07-15 10:05:57.488653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.899 [2024-07-15 10:05:57.491616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.899 [2024-07-15 10:05:57.501107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.899 [2024-07-15 10:05:57.501500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.899 [2024-07-15 10:05:57.501528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.899 [2024-07-15 10:05:57.501544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.899 [2024-07-15 10:05:57.501784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.899 [2024-07-15 10:05:57.502025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.899 [2024-07-15 10:05:57.502046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.899 [2024-07-15 10:05:57.502059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.899 [2024-07-15 10:05:57.505065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.899 [2024-07-15 10:05:57.514452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.899 [2024-07-15 10:05:57.514856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.899 [2024-07-15 10:05:57.514890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.899 [2024-07-15 10:05:57.514907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.899 [2024-07-15 10:05:57.515159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.899 [2024-07-15 10:05:57.515356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.899 [2024-07-15 10:05:57.515375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.899 [2024-07-15 10:05:57.515387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.899 [2024-07-15 10:05:57.518388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.899 [2024-07-15 10:05:57.527743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.899 [2024-07-15 10:05:57.528198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.899 [2024-07-15 10:05:57.528226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.899 [2024-07-15 10:05:57.528261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.899 [2024-07-15 10:05:57.528516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.899 [2024-07-15 10:05:57.528714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.899 [2024-07-15 10:05:57.528734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.899 [2024-07-15 10:05:57.528746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.899 [2024-07-15 10:05:57.531773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.899 [2024-07-15 10:05:57.541041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.899 [2024-07-15 10:05:57.541435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.899 [2024-07-15 10:05:57.541464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.899 [2024-07-15 10:05:57.541480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.899 [2024-07-15 10:05:57.541720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.899 [2024-07-15 10:05:57.541943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.900 [2024-07-15 10:05:57.541962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.900 [2024-07-15 10:05:57.541974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.900 [2024-07-15 10:05:57.544976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.900 [2024-07-15 10:05:57.554311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.900 [2024-07-15 10:05:57.554718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.900 [2024-07-15 10:05:57.554746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.900 [2024-07-15 10:05:57.554762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.900 [2024-07-15 10:05:57.555025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.900 [2024-07-15 10:05:57.555224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.900 [2024-07-15 10:05:57.555244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.900 [2024-07-15 10:05:57.555256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.900 [2024-07-15 10:05:57.558265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.900 [2024-07-15 10:05:57.567579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.900 [2024-07-15 10:05:57.567987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.900 [2024-07-15 10:05:57.568016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.900 [2024-07-15 10:05:57.568032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.900 [2024-07-15 10:05:57.568269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.900 [2024-07-15 10:05:57.568467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.900 [2024-07-15 10:05:57.568491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.900 [2024-07-15 10:05:57.568504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.900 [2024-07-15 10:05:57.571510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.900 [2024-07-15 10:05:57.580801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.900 [2024-07-15 10:05:57.581361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.900 [2024-07-15 10:05:57.581389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.900 [2024-07-15 10:05:57.581405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.900 [2024-07-15 10:05:57.581657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.900 [2024-07-15 10:05:57.581855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.900 [2024-07-15 10:05:57.581881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.900 [2024-07-15 10:05:57.581911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.900 [2024-07-15 10:05:57.584920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.900 [2024-07-15 10:05:57.594202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.900 [2024-07-15 10:05:57.594609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.900 [2024-07-15 10:05:57.594637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.900 [2024-07-15 10:05:57.594652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.900 [2024-07-15 10:05:57.594902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.900 [2024-07-15 10:05:57.595106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.900 [2024-07-15 10:05:57.595126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.900 [2024-07-15 10:05:57.595138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.900 [2024-07-15 10:05:57.598106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.900 [2024-07-15 10:05:57.607437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.900 [2024-07-15 10:05:57.607911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.900 [2024-07-15 10:05:57.607940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.900 [2024-07-15 10:05:57.607956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.900 [2024-07-15 10:05:57.608183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.900 [2024-07-15 10:05:57.608397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.900 [2024-07-15 10:05:57.608416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.900 [2024-07-15 10:05:57.608428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.900 [2024-07-15 10:05:57.611437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.900 [2024-07-15 10:05:57.620673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.900 [2024-07-15 10:05:57.621105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.900 [2024-07-15 10:05:57.621133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.900 [2024-07-15 10:05:57.621149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.900 [2024-07-15 10:05:57.621390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.900 [2024-07-15 10:05:57.621605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.900 [2024-07-15 10:05:57.621624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.900 [2024-07-15 10:05:57.621637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.900 [2024-07-15 10:05:57.624575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.900 [2024-07-15 10:05:57.633996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.900 [2024-07-15 10:05:57.634465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.900 [2024-07-15 10:05:57.634493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.900 [2024-07-15 10:05:57.634509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.900 [2024-07-15 10:05:57.634749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.900 [2024-07-15 10:05:57.634991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.900 [2024-07-15 10:05:57.635011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.900 [2024-07-15 10:05:57.635024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.900 [2024-07-15 10:05:57.637999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.900 [2024-07-15 10:05:57.647270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.900 [2024-07-15 10:05:57.647687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.900 [2024-07-15 10:05:57.647712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.900 [2024-07-15 10:05:57.647742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.900 [2024-07-15 10:05:57.647971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.900 [2024-07-15 10:05:57.648189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.900 [2024-07-15 10:05:57.648209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.900 [2024-07-15 10:05:57.648221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.900 [2024-07-15 10:05:57.651235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.900 [2024-07-15 10:05:57.660515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.900 [2024-07-15 10:05:57.660956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.900 [2024-07-15 10:05:57.660984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.900 [2024-07-15 10:05:57.661000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.900 [2024-07-15 10:05:57.661247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.900 [2024-07-15 10:05:57.661446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.900 [2024-07-15 10:05:57.661465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.900 [2024-07-15 10:05:57.661477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.900 [2024-07-15 10:05:57.664463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:40.900 [2024-07-15 10:05:57.673820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:40.900 [2024-07-15 10:05:57.674316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.900 [2024-07-15 10:05:57.674343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:40.900 [2024-07-15 10:05:57.674374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:40.900 [2024-07-15 10:05:57.674629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:40.900 [2024-07-15 10:05:57.674827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:40.900 [2024-07-15 10:05:57.674846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:40.900 [2024-07-15 10:05:57.674858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:40.900 [2024-07-15 10:05:57.678002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.163 [2024-07-15 10:05:57.687272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.163 [2024-07-15 10:05:57.687719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.163 [2024-07-15 10:05:57.687746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:41.163 [2024-07-15 10:05:57.687762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:41.163 [2024-07-15 10:05:57.688032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:41.163 [2024-07-15 10:05:57.688290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:41.163 [2024-07-15 10:05:57.688310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:41.163 [2024-07-15 10:05:57.688323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.163 [2024-07-15 10:05:57.691348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.163 [2024-07-15 10:05:57.700344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.163 [2024-07-15 10:05:57.700753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.163 [2024-07-15 10:05:57.700780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:41.163 [2024-07-15 10:05:57.700796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:41.163 [2024-07-15 10:05:57.701046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:41.163 [2024-07-15 10:05:57.701262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:41.163 [2024-07-15 10:05:57.701281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:41.163 [2024-07-15 10:05:57.701298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.163 [2024-07-15 10:05:57.704270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.163 [2024-07-15 10:05:57.713636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.163 [2024-07-15 10:05:57.714116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.163 [2024-07-15 10:05:57.714144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:41.163 [2024-07-15 10:05:57.714160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:41.163 [2024-07-15 10:05:57.714415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:41.163 [2024-07-15 10:05:57.714613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:41.163 [2024-07-15 10:05:57.714632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:41.163 [2024-07-15 10:05:57.714644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.163 [2024-07-15 10:05:57.717643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.163 [2024-07-15 10:05:57.726848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.163 [2024-07-15 10:05:57.727205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.163 [2024-07-15 10:05:57.727246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:41.163 [2024-07-15 10:05:57.727261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:41.163 [2024-07-15 10:05:57.727497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:41.163 [2024-07-15 10:05:57.727709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:41.163 [2024-07-15 10:05:57.727728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:41.163 [2024-07-15 10:05:57.727739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.163 [2024-07-15 10:05:57.730995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.163 [2024-07-15 10:05:57.740122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.163 [2024-07-15 10:05:57.740498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.163 [2024-07-15 10:05:57.740526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:41.163 [2024-07-15 10:05:57.740542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:41.163 [2024-07-15 10:05:57.740782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:41.163 [2024-07-15 10:05:57.741009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:41.163 [2024-07-15 10:05:57.741029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:41.163 [2024-07-15 10:05:57.741042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.163 [2024-07-15 10:05:57.744012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.163 [2024-07-15 10:05:57.753442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.163 [2024-07-15 10:05:57.753919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.163 [2024-07-15 10:05:57.753948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:41.163 [2024-07-15 10:05:57.753964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:41.163 [2024-07-15 10:05:57.754204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:41.163 [2024-07-15 10:05:57.754417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:41.163 [2024-07-15 10:05:57.754436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:41.163 [2024-07-15 10:05:57.754448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.163 [2024-07-15 10:05:57.757432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.163 [2024-07-15 10:05:57.766710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.163 [2024-07-15 10:05:57.767079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.163 [2024-07-15 10:05:57.767106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:41.163 [2024-07-15 10:05:57.767121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:41.163 [2024-07-15 10:05:57.767335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:41.163 [2024-07-15 10:05:57.767533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:41.163 [2024-07-15 10:05:57.767552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:41.163 [2024-07-15 10:05:57.767564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.163 [2024-07-15 10:05:57.770580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.163 [2024-07-15 10:05:57.780035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.163 [2024-07-15 10:05:57.780442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.163 [2024-07-15 10:05:57.780484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:41.163 [2024-07-15 10:05:57.780500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:41.163 [2024-07-15 10:05:57.780743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:41.163 [2024-07-15 10:05:57.780986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:41.163 [2024-07-15 10:05:57.781007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:41.163 [2024-07-15 10:05:57.781020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.163 [2024-07-15 10:05:57.784009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.163 [2024-07-15 10:05:57.793317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.163 [2024-07-15 10:05:57.793761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.163 [2024-07-15 10:05:57.793803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:41.163 [2024-07-15 10:05:57.793819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:41.163 [2024-07-15 10:05:57.794076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:41.163 [2024-07-15 10:05:57.794308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:41.163 [2024-07-15 10:05:57.794329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:41.163 [2024-07-15 10:05:57.794341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.163 [2024-07-15 10:05:57.797350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.164 [2024-07-15 10:05:57.806674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.164 [2024-07-15 10:05:57.807128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.164 [2024-07-15 10:05:57.807156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:41.164 [2024-07-15 10:05:57.807172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:41.164 [2024-07-15 10:05:57.807410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:41.164 [2024-07-15 10:05:57.807608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:41.164 [2024-07-15 10:05:57.807627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:41.164 [2024-07-15 10:05:57.807639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.164 [2024-07-15 10:05:57.810675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.164 [2024-07-15 10:05:57.820031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.164 [2024-07-15 10:05:57.820442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.164 [2024-07-15 10:05:57.820471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:41.164 [2024-07-15 10:05:57.820487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:41.164 [2024-07-15 10:05:57.820729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:41.164 [2024-07-15 10:05:57.820950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:41.164 [2024-07-15 10:05:57.820970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:41.164 [2024-07-15 10:05:57.820982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.164 [2024-07-15 10:05:57.823827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.164 [2024-07-15 10:05:57.833346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.164 [2024-07-15 10:05:57.833750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.164 [2024-07-15 10:05:57.833778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:41.164 [2024-07-15 10:05:57.833793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:41.164 [2024-07-15 10:05:57.834030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:41.164 [2024-07-15 10:05:57.834268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:41.164 [2024-07-15 10:05:57.834287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:41.164 [2024-07-15 10:05:57.834304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.164 [2024-07-15 10:05:57.837271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.164 [2024-07-15 10:05:57.846592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.164 [2024-07-15 10:05:57.846959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.164 [2024-07-15 10:05:57.846988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:41.164 [2024-07-15 10:05:57.847003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:41.164 [2024-07-15 10:05:57.847210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:41.164 [2024-07-15 10:05:57.847424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:41.164 [2024-07-15 10:05:57.847443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:41.164 [2024-07-15 10:05:57.847455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.164 [2024-07-15 10:05:57.850439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.164 [2024-07-15 10:05:57.859902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.164 [2024-07-15 10:05:57.860356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.164 [2024-07-15 10:05:57.860384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:41.164 [2024-07-15 10:05:57.860399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:41.164 [2024-07-15 10:05:57.860639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:41.164 [2024-07-15 10:05:57.860852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:41.164 [2024-07-15 10:05:57.860871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:41.164 [2024-07-15 10:05:57.860908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.164 [2024-07-15 10:05:57.863858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.164 [2024-07-15 10:05:57.873241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.164 [2024-07-15 10:05:57.873634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.164 [2024-07-15 10:05:57.873661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:41.164 [2024-07-15 10:05:57.873677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:41.164 [2024-07-15 10:05:57.873941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:41.164 [2024-07-15 10:05:57.874152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:41.164 [2024-07-15 10:05:57.874188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:41.164 [2024-07-15 10:05:57.874200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.164 [2024-07-15 10:05:57.877174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.164 [2024-07-15 10:05:57.886476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.164 [2024-07-15 10:05:57.886851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.164 [2024-07-15 10:05:57.886890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:41.164 [2024-07-15 10:05:57.886909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:41.164 [2024-07-15 10:05:57.887151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:41.164 [2024-07-15 10:05:57.887348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:41.164 [2024-07-15 10:05:57.887367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:41.164 [2024-07-15 10:05:57.887379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.164 [2024-07-15 10:05:57.890351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.164 [2024-07-15 10:05:57.899798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.164 [2024-07-15 10:05:57.900268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.164 [2024-07-15 10:05:57.900296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:41.164 [2024-07-15 10:05:57.900327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:41.164 [2024-07-15 10:05:57.900566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:41.164 [2024-07-15 10:05:57.900764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:41.164 [2024-07-15 10:05:57.900783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:41.164 [2024-07-15 10:05:57.900795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.164 [2024-07-15 10:05:57.903771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.164 [2024-07-15 10:05:57.913110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.164 [2024-07-15 10:05:57.913563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.164 [2024-07-15 10:05:57.913591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:41.164 [2024-07-15 10:05:57.913606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:41.164 [2024-07-15 10:05:57.913846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:41.164 [2024-07-15 10:05:57.914088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:41.164 [2024-07-15 10:05:57.914109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:41.164 [2024-07-15 10:05:57.914122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.164 [2024-07-15 10:05:57.917090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.164 [2024-07-15 10:05:57.926365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:41.164 [2024-07-15 10:05:57.926835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:41.164 [2024-07-15 10:05:57.926863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:41.164 [2024-07-15 10:05:57.926887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:41.164 [2024-07-15 10:05:57.927131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:41.164 [2024-07-15 10:05:57.927355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:41.164 [2024-07-15 10:05:57.927374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:41.164 [2024-07-15 10:05:57.927386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:41.164 [2024-07-15 10:05:57.930357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:41.164 [2024-07-15 10:05:57.939690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.165 [2024-07-15 10:05:57.940083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.165 [2024-07-15 10:05:57.940111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.165 [2024-07-15 10:05:57.940126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.165 [2024-07-15 10:05:57.940366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.165 [2024-07-15 10:05:57.940579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.165 [2024-07-15 10:05:57.940598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.165 [2024-07-15 10:05:57.940610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.165 [2024-07-15 10:05:57.943749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.425 [2024-07-15 10:05:57.953224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.425 [2024-07-15 10:05:57.953640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-15 10:05:57.953667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.425 [2024-07-15 10:05:57.953683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.425 [2024-07-15 10:05:57.953926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.425 [2024-07-15 10:05:57.954145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.425 [2024-07-15 10:05:57.954165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.425 [2024-07-15 10:05:57.954178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.425 [2024-07-15 10:05:57.957156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.425 [2024-07-15 10:05:57.966454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.425 [2024-07-15 10:05:57.966872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-15 10:05:57.966919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.425 [2024-07-15 10:05:57.966935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.425 [2024-07-15 10:05:57.967187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.425 [2024-07-15 10:05:57.967385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.425 [2024-07-15 10:05:57.967404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.425 [2024-07-15 10:05:57.967416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.425 [2024-07-15 10:05:57.970432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.425 [2024-07-15 10:05:57.979959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.425 [2024-07-15 10:05:57.980375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-15 10:05:57.980402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.425 [2024-07-15 10:05:57.980418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.425 [2024-07-15 10:05:57.980661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.425 [2024-07-15 10:05:57.980890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.425 [2024-07-15 10:05:57.980911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.425 [2024-07-15 10:05:57.980924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.425 [2024-07-15 10:05:57.983994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.425 [2024-07-15 10:05:57.993261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.425 [2024-07-15 10:05:57.993682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-15 10:05:57.993709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.425 [2024-07-15 10:05:57.993740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.425 [2024-07-15 10:05:57.993975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.425 [2024-07-15 10:05:57.994200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.425 [2024-07-15 10:05:57.994235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.425 [2024-07-15 10:05:57.994247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.425 [2024-07-15 10:05:57.997216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.425 [2024-07-15 10:05:58.006497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.425 [2024-07-15 10:05:58.006943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.425 [2024-07-15 10:05:58.006972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.425 [2024-07-15 10:05:58.006988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.425 [2024-07-15 10:05:58.007230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.426 [2024-07-15 10:05:58.007428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.426 [2024-07-15 10:05:58.007447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.426 [2024-07-15 10:05:58.007459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.426 [2024-07-15 10:05:58.010479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.426 [2024-07-15 10:05:58.019794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.426 [2024-07-15 10:05:58.020277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-15 10:05:58.020319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.426 [2024-07-15 10:05:58.020341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.426 [2024-07-15 10:05:58.020596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.426 [2024-07-15 10:05:58.020794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.426 [2024-07-15 10:05:58.020814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.426 [2024-07-15 10:05:58.020826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.426 [2024-07-15 10:05:58.023805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.426 [2024-07-15 10:05:58.033249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.426 [2024-07-15 10:05:58.033724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-15 10:05:58.033753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.426 [2024-07-15 10:05:58.033769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.426 [2024-07-15 10:05:58.033993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.426 [2024-07-15 10:05:58.034236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.426 [2024-07-15 10:05:58.034256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.426 [2024-07-15 10:05:58.034269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.426 [2024-07-15 10:05:58.037384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.426 [2024-07-15 10:05:58.046636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.426 [2024-07-15 10:05:58.047026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-15 10:05:58.047055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.426 [2024-07-15 10:05:58.047072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.426 [2024-07-15 10:05:58.047315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.426 [2024-07-15 10:05:58.047514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.426 [2024-07-15 10:05:58.047534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.426 [2024-07-15 10:05:58.047546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.426 [2024-07-15 10:05:58.050621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.426 [2024-07-15 10:05:58.060021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.426 [2024-07-15 10:05:58.060580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-15 10:05:58.060622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.426 [2024-07-15 10:05:58.060638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.426 [2024-07-15 10:05:58.060887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.426 [2024-07-15 10:05:58.061127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.426 [2024-07-15 10:05:58.061168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.426 [2024-07-15 10:05:58.061183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.426 [2024-07-15 10:05:58.064319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.426 [2024-07-15 10:05:58.073535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.426 [2024-07-15 10:05:58.073916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-15 10:05:58.073959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.426 [2024-07-15 10:05:58.073975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.426 [2024-07-15 10:05:58.074225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.426 [2024-07-15 10:05:58.074424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.426 [2024-07-15 10:05:58.074442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.426 [2024-07-15 10:05:58.074455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.426 [2024-07-15 10:05:58.077511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.426 [2024-07-15 10:05:58.086904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.426 [2024-07-15 10:05:58.087254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-15 10:05:58.087280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.426 [2024-07-15 10:05:58.087295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.426 [2024-07-15 10:05:58.087509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.426 [2024-07-15 10:05:58.087707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.426 [2024-07-15 10:05:58.087727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.426 [2024-07-15 10:05:58.087739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.426 [2024-07-15 10:05:58.090713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.426 [2024-07-15 10:05:58.100392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.426 [2024-07-15 10:05:58.100864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-15 10:05:58.100899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.426 [2024-07-15 10:05:58.100916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.426 [2024-07-15 10:05:58.101157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.426 [2024-07-15 10:05:58.101370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.426 [2024-07-15 10:05:58.101389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.426 [2024-07-15 10:05:58.101401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.426 [2024-07-15 10:05:58.104415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.426 [2024-07-15 10:05:58.113624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.426 [2024-07-15 10:05:58.114044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.426 [2024-07-15 10:05:58.114072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.426 [2024-07-15 10:05:58.114088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.426 [2024-07-15 10:05:58.114328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.426 [2024-07-15 10:05:58.114526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.426 [2024-07-15 10:05:58.114545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.426 [2024-07-15 10:05:58.114557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.426 [2024-07-15 10:05:58.117499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.426 [2024-07-15 10:05:58.127066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.426 [2024-07-15 10:05:58.127492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.427 [2024-07-15 10:05:58.127519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.427 [2024-07-15 10:05:58.127550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.427 [2024-07-15 10:05:58.127802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.427 [2024-07-15 10:05:58.128030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.427 [2024-07-15 10:05:58.128051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.427 [2024-07-15 10:05:58.128063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.427 [2024-07-15 10:05:58.131133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.427 [2024-07-15 10:05:58.140424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.427 [2024-07-15 10:05:58.140785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.427 [2024-07-15 10:05:58.140824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.427 [2024-07-15 10:05:58.140840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.427 [2024-07-15 10:05:58.141095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.427 [2024-07-15 10:05:58.141333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.427 [2024-07-15 10:05:58.141353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.427 [2024-07-15 10:05:58.141365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.427 [2024-07-15 10:05:58.144474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.427 [2024-07-15 10:05:58.153802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.427 [2024-07-15 10:05:58.154253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.427 [2024-07-15 10:05:58.154294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.427 [2024-07-15 10:05:58.154311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.427 [2024-07-15 10:05:58.154572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.427 [2024-07-15 10:05:58.154770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.427 [2024-07-15 10:05:58.154789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.427 [2024-07-15 10:05:58.154801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.427 [2024-07-15 10:05:58.157799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.427 [2024-07-15 10:05:58.167006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.427 [2024-07-15 10:05:58.167419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.427 [2024-07-15 10:05:58.167447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.427 [2024-07-15 10:05:58.167462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.427 [2024-07-15 10:05:58.167703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.427 [2024-07-15 10:05:58.167925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.427 [2024-07-15 10:05:58.167945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.427 [2024-07-15 10:05:58.167957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.427 [2024-07-15 10:05:58.170965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.427 [2024-07-15 10:05:58.180319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.427 [2024-07-15 10:05:58.180764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.427 [2024-07-15 10:05:58.180806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.427 [2024-07-15 10:05:58.180821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.427 [2024-07-15 10:05:58.181083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.427 [2024-07-15 10:05:58.181301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.427 [2024-07-15 10:05:58.181321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.427 [2024-07-15 10:05:58.181333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.427 [2024-07-15 10:05:58.184342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.427 [2024-07-15 10:05:58.193496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.427 [2024-07-15 10:05:58.193896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.427 [2024-07-15 10:05:58.193924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.427 [2024-07-15 10:05:58.193940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.427 [2024-07-15 10:05:58.194181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.427 [2024-07-15 10:05:58.194394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.427 [2024-07-15 10:05:58.194413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.427 [2024-07-15 10:05:58.194430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.427 [2024-07-15 10:05:58.197402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.427 [2024-07-15 10:05:58.207036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.427 [2024-07-15 10:05:58.207509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.427 [2024-07-15 10:05:58.207537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.427 [2024-07-15 10:05:58.207552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.427 [2024-07-15 10:05:58.207793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.427 [2024-07-15 10:05:58.208033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.427 [2024-07-15 10:05:58.208054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.427 [2024-07-15 10:05:58.208066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.688 [2024-07-15 10:05:58.211272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.688 [2024-07-15 10:05:58.220378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.688 [2024-07-15 10:05:58.220787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.688 [2024-07-15 10:05:58.220815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.688 [2024-07-15 10:05:58.220831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.688 [2024-07-15 10:05:58.221067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.688 [2024-07-15 10:05:58.221291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.688 [2024-07-15 10:05:58.221311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.688 [2024-07-15 10:05:58.221324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.688 [2024-07-15 10:05:58.224803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.688 [2024-07-15 10:05:58.233598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.688 [2024-07-15 10:05:58.234050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.688 [2024-07-15 10:05:58.234079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.688 [2024-07-15 10:05:58.234095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.688 [2024-07-15 10:05:58.234336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.688 [2024-07-15 10:05:58.234550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.688 [2024-07-15 10:05:58.234569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.688 [2024-07-15 10:05:58.234581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.688 [2024-07-15 10:05:58.237520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.688 [2024-07-15 10:05:58.246845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.688 [2024-07-15 10:05:58.247263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.688 [2024-07-15 10:05:58.247291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.688 [2024-07-15 10:05:58.247307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.688 [2024-07-15 10:05:58.247547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.688 [2024-07-15 10:05:58.247759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.688 [2024-07-15 10:05:58.247778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.688 [2024-07-15 10:05:58.247790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.688 [2024-07-15 10:05:58.250763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.688 [2024-07-15 10:05:58.260064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.688 [2024-07-15 10:05:58.260475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.688 [2024-07-15 10:05:58.260515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.688 [2024-07-15 10:05:58.260531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.688 [2024-07-15 10:05:58.260773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.688 [2024-07-15 10:05:58.260998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.688 [2024-07-15 10:05:58.261019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.688 [2024-07-15 10:05:58.261031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.688 [2024-07-15 10:05:58.264001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.688 [2024-07-15 10:05:58.274013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.688 [2024-07-15 10:05:58.274429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.688 [2024-07-15 10:05:58.274468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.688 [2024-07-15 10:05:58.274484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.688 [2024-07-15 10:05:58.274715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.688 [2024-07-15 10:05:58.274979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.688 [2024-07-15 10:05:58.274999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.688 [2024-07-15 10:05:58.275012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.688 [2024-07-15 10:05:58.278548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.688 [2024-07-15 10:05:58.288019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.688 [2024-07-15 10:05:58.288445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.688 [2024-07-15 10:05:58.288475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.689 [2024-07-15 10:05:58.288493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.689 [2024-07-15 10:05:58.288736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.689 [2024-07-15 10:05:58.288997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.689 [2024-07-15 10:05:58.289017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.689 [2024-07-15 10:05:58.289030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.689 [2024-07-15 10:05:58.292589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.689 [2024-07-15 10:05:58.301847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.689 [2024-07-15 10:05:58.302289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.689 [2024-07-15 10:05:58.302317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.689 [2024-07-15 10:05:58.302347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.689 [2024-07-15 10:05:58.302602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.689 [2024-07-15 10:05:58.302844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.689 [2024-07-15 10:05:58.302867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.689 [2024-07-15 10:05:58.302893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.689 [2024-07-15 10:05:58.306437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.689 [2024-07-15 10:05:58.315687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.689 [2024-07-15 10:05:58.316131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.689 [2024-07-15 10:05:58.316159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.689 [2024-07-15 10:05:58.316174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.689 [2024-07-15 10:05:58.316429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.689 [2024-07-15 10:05:58.316671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.689 [2024-07-15 10:05:58.316695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.689 [2024-07-15 10:05:58.316709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.689 [2024-07-15 10:05:58.320278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.689 [2024-07-15 10:05:58.329516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.689 [2024-07-15 10:05:58.329938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.689 [2024-07-15 10:05:58.329969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.689 [2024-07-15 10:05:58.329987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.689 [2024-07-15 10:05:58.330225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.689 [2024-07-15 10:05:58.330466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.689 [2024-07-15 10:05:58.330489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.689 [2024-07-15 10:05:58.330504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.689 [2024-07-15 10:05:58.334069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.689 [2024-07-15 10:05:58.343486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.689 [2024-07-15 10:05:58.343924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.689 [2024-07-15 10:05:58.343956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.689 [2024-07-15 10:05:58.343974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.689 [2024-07-15 10:05:58.344211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.689 [2024-07-15 10:05:58.344452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.689 [2024-07-15 10:05:58.344475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.689 [2024-07-15 10:05:58.344490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.689 [2024-07-15 10:05:58.348045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.689 [2024-07-15 10:05:58.357497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.689 [2024-07-15 10:05:58.357997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.689 [2024-07-15 10:05:58.358028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.689 [2024-07-15 10:05:58.358046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.689 [2024-07-15 10:05:58.358284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.689 [2024-07-15 10:05:58.358525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.689 [2024-07-15 10:05:58.358548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.689 [2024-07-15 10:05:58.358562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.689 [2024-07-15 10:05:58.362113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.689 [2024-07-15 10:05:58.371382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.689 [2024-07-15 10:05:58.371828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.689 [2024-07-15 10:05:58.371856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.689 [2024-07-15 10:05:58.371896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.689 [2024-07-15 10:05:58.372149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.689 [2024-07-15 10:05:58.372400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.689 [2024-07-15 10:05:58.372424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.689 [2024-07-15 10:05:58.372438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.689 [2024-07-15 10:05:58.376007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.689 [2024-07-15 10:05:58.385255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.689 [2024-07-15 10:05:58.385686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.689 [2024-07-15 10:05:58.385722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.689 [2024-07-15 10:05:58.385741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.689 [2024-07-15 10:05:58.385991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.689 [2024-07-15 10:05:58.386211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.689 [2024-07-15 10:05:58.386235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.689 [2024-07-15 10:05:58.386250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.689 [2024-07-15 10:05:58.389812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.689 [2024-07-15 10:05:58.399243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.689 [2024-07-15 10:05:58.399668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.689 [2024-07-15 10:05:58.399698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.689 [2024-07-15 10:05:58.399716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.689 [2024-07-15 10:05:58.399976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.689 [2024-07-15 10:05:58.400175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.689 [2024-07-15 10:05:58.400211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.689 [2024-07-15 10:05:58.400226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.689 [2024-07-15 10:05:58.403792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.689 [2024-07-15 10:05:58.413228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.689 [2024-07-15 10:05:58.413638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.689 [2024-07-15 10:05:58.413668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.689 [2024-07-15 10:05:58.413686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.689 [2024-07-15 10:05:58.413948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.689 [2024-07-15 10:05:58.414168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.689 [2024-07-15 10:05:58.414187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.689 [2024-07-15 10:05:58.414199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.689 [2024-07-15 10:05:58.417800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.689 [2024-07-15 10:05:58.427251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.689 [2024-07-15 10:05:58.427684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.689 [2024-07-15 10:05:58.427726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.689 [2024-07-15 10:05:58.427741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.689 [2024-07-15 10:05:58.427995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.689 [2024-07-15 10:05:58.428224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.690 [2024-07-15 10:05:58.428248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.690 [2024-07-15 10:05:58.428263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.690 [2024-07-15 10:05:58.431830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.690 [2024-07-15 10:05:58.441273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.690 [2024-07-15 10:05:58.441684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.690 [2024-07-15 10:05:58.441715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.690 [2024-07-15 10:05:58.441733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.690 [2024-07-15 10:05:58.441994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.690 [2024-07-15 10:05:58.442227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.690 [2024-07-15 10:05:58.442251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.690 [2024-07-15 10:05:58.442266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.690 [2024-07-15 10:05:58.445831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.690 [2024-07-15 10:05:58.455258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.690 [2024-07-15 10:05:58.455703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.690 [2024-07-15 10:05:58.455729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.690 [2024-07-15 10:05:58.455760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.690 [2024-07-15 10:05:58.456030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.690 [2024-07-15 10:05:58.456247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.690 [2024-07-15 10:05:58.456270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.690 [2024-07-15 10:05:58.456286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.690 [2024-07-15 10:05:58.459847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.690 [2024-07-15 10:05:58.469291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.690 [2024-07-15 10:05:58.469741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.690 [2024-07-15 10:05:58.469768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.690 [2024-07-15 10:05:58.469798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.690 [2024-07-15 10:05:58.470087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.690 [2024-07-15 10:05:58.470337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.690 [2024-07-15 10:05:58.470361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.690 [2024-07-15 10:05:58.470376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.950 [2024-07-15 10:05:58.473965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.950 [2024-07-15 10:05:58.483207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.950 [2024-07-15 10:05:58.483658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.950 [2024-07-15 10:05:58.483690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.950 [2024-07-15 10:05:58.483707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.950 [2024-07-15 10:05:58.483958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.950 [2024-07-15 10:05:58.484200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.950 [2024-07-15 10:05:58.484223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.950 [2024-07-15 10:05:58.484238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.950 [2024-07-15 10:05:58.487782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.950 [2024-07-15 10:05:58.497238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.950 [2024-07-15 10:05:58.497666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.950 [2024-07-15 10:05:58.497697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.950 [2024-07-15 10:05:58.497714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.950 [2024-07-15 10:05:58.497972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.950 [2024-07-15 10:05:58.498191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.950 [2024-07-15 10:05:58.498211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.950 [2024-07-15 10:05:58.498239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.950 [2024-07-15 10:05:58.501798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.950 [2024-07-15 10:05:58.511234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.950 [2024-07-15 10:05:58.511678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.950 [2024-07-15 10:05:58.511719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.950 [2024-07-15 10:05:58.511735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.950 [2024-07-15 10:05:58.511980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.950 [2024-07-15 10:05:58.512196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.950 [2024-07-15 10:05:58.512220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.950 [2024-07-15 10:05:58.512235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.950 [2024-07-15 10:05:58.515797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.950 [2024-07-15 10:05:58.525245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.950 [2024-07-15 10:05:58.525647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.950 [2024-07-15 10:05:58.525679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.950 [2024-07-15 10:05:58.525702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.950 [2024-07-15 10:05:58.525965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.950 [2024-07-15 10:05:58.526164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.950 [2024-07-15 10:05:58.526183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.950 [2024-07-15 10:05:58.526212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.950 [2024-07-15 10:05:58.529775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.950 [2024-07-15 10:05:58.539210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.950 [2024-07-15 10:05:58.539629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.950 [2024-07-15 10:05:58.539659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.950 [2024-07-15 10:05:58.539677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.950 [2024-07-15 10:05:58.539925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.950 [2024-07-15 10:05:58.540146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.951 [2024-07-15 10:05:58.540180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.951 [2024-07-15 10:05:58.540192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.951 [2024-07-15 10:05:58.543765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.951 [2024-07-15 10:05:58.553208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.951 [2024-07-15 10:05:58.553651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.951 [2024-07-15 10:05:58.553699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.951 [2024-07-15 10:05:58.553717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.951 [2024-07-15 10:05:58.553975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.951 [2024-07-15 10:05:58.554194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.951 [2024-07-15 10:05:58.554213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.951 [2024-07-15 10:05:58.554241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.951 [2024-07-15 10:05:58.557805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.951 [2024-07-15 10:05:58.567241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.951 [2024-07-15 10:05:58.567682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.951 [2024-07-15 10:05:58.567729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.951 [2024-07-15 10:05:58.567746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.951 [2024-07-15 10:05:58.568015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.951 [2024-07-15 10:05:58.568237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.951 [2024-07-15 10:05:58.568266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.951 [2024-07-15 10:05:58.568282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.951 [2024-07-15 10:05:58.571844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.951 [2024-07-15 10:05:58.581117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.951 [2024-07-15 10:05:58.581517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.951 [2024-07-15 10:05:58.581547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.951 [2024-07-15 10:05:58.581565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.951 [2024-07-15 10:05:58.581803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.951 [2024-07-15 10:05:58.582050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.951 [2024-07-15 10:05:58.582070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.951 [2024-07-15 10:05:58.582083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.951 [2024-07-15 10:05:58.585638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.951 [2024-07-15 10:05:58.595111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.951 [2024-07-15 10:05:58.595544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.951 [2024-07-15 10:05:58.595575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.951 [2024-07-15 10:05:58.595593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.951 [2024-07-15 10:05:58.595830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.951 [2024-07-15 10:05:58.596068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.951 [2024-07-15 10:05:58.596088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.951 [2024-07-15 10:05:58.596100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.951 [2024-07-15 10:05:58.599662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.951 [2024-07-15 10:05:58.609120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.951 [2024-07-15 10:05:58.609531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.951 [2024-07-15 10:05:58.609562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.951 [2024-07-15 10:05:58.609579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.951 [2024-07-15 10:05:58.609817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.951 [2024-07-15 10:05:58.610062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.951 [2024-07-15 10:05:58.610081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.951 [2024-07-15 10:05:58.610093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.951 [2024-07-15 10:05:58.613652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.951 [2024-07-15 10:05:58.623108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.951 [2024-07-15 10:05:58.623543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.951 [2024-07-15 10:05:58.623574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.951 [2024-07-15 10:05:58.623593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.951 [2024-07-15 10:05:58.623830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.951 [2024-07-15 10:05:58.624068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.951 [2024-07-15 10:05:58.624088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.951 [2024-07-15 10:05:58.624100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.951 [2024-07-15 10:05:58.627639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.951 [2024-07-15 10:05:58.637066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.951 [2024-07-15 10:05:58.637475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.951 [2024-07-15 10:05:58.637506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.951 [2024-07-15 10:05:58.637524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.951 [2024-07-15 10:05:58.637762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.951 [2024-07-15 10:05:58.638017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.951 [2024-07-15 10:05:58.638042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.951 [2024-07-15 10:05:58.638057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.951 [2024-07-15 10:05:58.641591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.951 [2024-07-15 10:05:58.651027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.951 [2024-07-15 10:05:58.651447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.951 [2024-07-15 10:05:58.651478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.951 [2024-07-15 10:05:58.651495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.951 [2024-07-15 10:05:58.651733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.951 [2024-07-15 10:05:58.651983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.951 [2024-07-15 10:05:58.652007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.951 [2024-07-15 10:05:58.652023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.951 [2024-07-15 10:05:58.655560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.951 [2024-07-15 10:05:58.664999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.951 [2024-07-15 10:05:58.665400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.951 [2024-07-15 10:05:58.665431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.951 [2024-07-15 10:05:58.665449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.951 [2024-07-15 10:05:58.665691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.951 [2024-07-15 10:05:58.665943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.951 [2024-07-15 10:05:58.665967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.951 [2024-07-15 10:05:58.665982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.951 [2024-07-15 10:05:58.669508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.951 [2024-07-15 10:05:58.678949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.951 [2024-07-15 10:05:58.679358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.951 [2024-07-15 10:05:58.679388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.951 [2024-07-15 10:05:58.679406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.951 [2024-07-15 10:05:58.679643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.951 [2024-07-15 10:05:58.679894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.951 [2024-07-15 10:05:58.679918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.951 [2024-07-15 10:05:58.679933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.951 [2024-07-15 10:05:58.683477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.951 [2024-07-15 10:05:58.692921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.951 [2024-07-15 10:05:58.693354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.951 [2024-07-15 10:05:58.693396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.951 [2024-07-15 10:05:58.693412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.951 [2024-07-15 10:05:58.693666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.952 [2024-07-15 10:05:58.693930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.952 [2024-07-15 10:05:58.693950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.952 [2024-07-15 10:05:58.693962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.952 [2024-07-15 10:05:58.697487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.952 [2024-07-15 10:05:58.706742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.952 [2024-07-15 10:05:58.707155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.952 [2024-07-15 10:05:58.707182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.952 [2024-07-15 10:05:58.707214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.952 [2024-07-15 10:05:58.707452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.952 [2024-07-15 10:05:58.707694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.952 [2024-07-15 10:05:58.707717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.952 [2024-07-15 10:05:58.707738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.952 [2024-07-15 10:05:58.711263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.952 [2024-07-15 10:05:58.720617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.952 [2024-07-15 10:05:58.721053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.952 [2024-07-15 10:05:58.721084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:41.952 [2024-07-15 10:05:58.721102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:41.952 [2024-07-15 10:05:58.721340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:41.952 [2024-07-15 10:05:58.721582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:41.952 [2024-07-15 10:05:58.721605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:41.952 [2024-07-15 10:05:58.721620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:41.952 [2024-07-15 10:05:58.725168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.213 [2024-07-15 10:05:58.734600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.213 [2024-07-15 10:05:58.735007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.213 [2024-07-15 10:05:58.735039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.213 [2024-07-15 10:05:58.735057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.213 [2024-07-15 10:05:58.735294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.213 [2024-07-15 10:05:58.735536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.213 [2024-07-15 10:05:58.735559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.213 [2024-07-15 10:05:58.735575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.213 [2024-07-15 10:05:58.739136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.213 [2024-07-15 10:05:58.748826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.213 [2024-07-15 10:05:58.749308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.213 [2024-07-15 10:05:58.749339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.213 [2024-07-15 10:05:58.749357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.213 [2024-07-15 10:05:58.749594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.213 [2024-07-15 10:05:58.749836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.213 [2024-07-15 10:05:58.749859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.213 [2024-07-15 10:05:58.749873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.213 [2024-07-15 10:05:58.753422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.213 [2024-07-15 10:05:58.762672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.213 [2024-07-15 10:05:58.763115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.213 [2024-07-15 10:05:58.763146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.213 [2024-07-15 10:05:58.763164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.213 [2024-07-15 10:05:58.763401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.213 [2024-07-15 10:05:58.763642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.213 [2024-07-15 10:05:58.763666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.213 [2024-07-15 10:05:58.763680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.213 [2024-07-15 10:05:58.767282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.213 [2024-07-15 10:05:58.776555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.213 [2024-07-15 10:05:58.776974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.213 [2024-07-15 10:05:58.777016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.213 [2024-07-15 10:05:58.777034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.213 [2024-07-15 10:05:58.777273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.213 [2024-07-15 10:05:58.777514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.213 [2024-07-15 10:05:58.777537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.213 [2024-07-15 10:05:58.777552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.213 [2024-07-15 10:05:58.781140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.213 [2024-07-15 10:05:58.790411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.213 [2024-07-15 10:05:58.790858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.213 [2024-07-15 10:05:58.790917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.213 [2024-07-15 10:05:58.790936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.213 [2024-07-15 10:05:58.791173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.213 [2024-07-15 10:05:58.791414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.213 [2024-07-15 10:05:58.791437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.214 [2024-07-15 10:05:58.791452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.214 [2024-07-15 10:05:58.795020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.214 [2024-07-15 10:05:58.804314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.214 [2024-07-15 10:05:58.804763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.214 [2024-07-15 10:05:58.804813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.214 [2024-07-15 10:05:58.804830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.214 [2024-07-15 10:05:58.805078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.214 [2024-07-15 10:05:58.805326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.214 [2024-07-15 10:05:58.805350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.214 [2024-07-15 10:05:58.805365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.214 [2024-07-15 10:05:58.808936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.214 [2024-07-15 10:05:58.818215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.214 [2024-07-15 10:05:58.818694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.214 [2024-07-15 10:05:58.818724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.214 [2024-07-15 10:05:58.818742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.214 [2024-07-15 10:05:58.818990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.214 [2024-07-15 10:05:58.819232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.214 [2024-07-15 10:05:58.819256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.214 [2024-07-15 10:05:58.819271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.214 [2024-07-15 10:05:58.822834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.214 [2024-07-15 10:05:58.832116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.214 [2024-07-15 10:05:58.832539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.214 [2024-07-15 10:05:58.832570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.214 [2024-07-15 10:05:58.832587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.214 [2024-07-15 10:05:58.832824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.214 [2024-07-15 10:05:58.833078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.214 [2024-07-15 10:05:58.833102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.214 [2024-07-15 10:05:58.833118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.214 [2024-07-15 10:05:58.836682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.214 [2024-07-15 10:05:58.845965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.214 [2024-07-15 10:05:58.846400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.214 [2024-07-15 10:05:58.846431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.214 [2024-07-15 10:05:58.846449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.214 [2024-07-15 10:05:58.846686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.214 [2024-07-15 10:05:58.846938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.214 [2024-07-15 10:05:58.846962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.214 [2024-07-15 10:05:58.846977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.214 [2024-07-15 10:05:58.850545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.214 [2024-07-15 10:05:58.859813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.214 [2024-07-15 10:05:58.860244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.214 [2024-07-15 10:05:58.860275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.214 [2024-07-15 10:05:58.860293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.214 [2024-07-15 10:05:58.860531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.214 [2024-07-15 10:05:58.860772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.214 [2024-07-15 10:05:58.860795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.214 [2024-07-15 10:05:58.860810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.214 [2024-07-15 10:05:58.864382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.214 [2024-07-15 10:05:58.873648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.214 [2024-07-15 10:05:58.874096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.214 [2024-07-15 10:05:58.874127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.214 [2024-07-15 10:05:58.874145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.214 [2024-07-15 10:05:58.874382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.214 [2024-07-15 10:05:58.874624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.214 [2024-07-15 10:05:58.874647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.214 [2024-07-15 10:05:58.874661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.214 [2024-07-15 10:05:58.878233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.214 [2024-07-15 10:05:58.887501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.214 [2024-07-15 10:05:58.887939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.214 [2024-07-15 10:05:58.887970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.214 [2024-07-15 10:05:58.887987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.214 [2024-07-15 10:05:58.888225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.214 [2024-07-15 10:05:58.888466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.214 [2024-07-15 10:05:58.888490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.214 [2024-07-15 10:05:58.888505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.214 [2024-07-15 10:05:58.892079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.214 [2024-07-15 10:05:58.901349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.214 [2024-07-15 10:05:58.901773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.214 [2024-07-15 10:05:58.901804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.214 [2024-07-15 10:05:58.901827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.214 [2024-07-15 10:05:58.902077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.214 [2024-07-15 10:05:58.902319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.214 [2024-07-15 10:05:58.902342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.214 [2024-07-15 10:05:58.902357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.214 [2024-07-15 10:05:58.905927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.214 [2024-07-15 10:05:58.915194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.214 [2024-07-15 10:05:58.915599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.214 [2024-07-15 10:05:58.915630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.214 [2024-07-15 10:05:58.915647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.214 [2024-07-15 10:05:58.915895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.214 [2024-07-15 10:05:58.916137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.214 [2024-07-15 10:05:58.916160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.214 [2024-07-15 10:05:58.916175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.214 [2024-07-15 10:05:58.919737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.214 [2024-07-15 10:05:58.929227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.214 [2024-07-15 10:05:58.929653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.214 [2024-07-15 10:05:58.929684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.214 [2024-07-15 10:05:58.929701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.214 [2024-07-15 10:05:58.929950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.214 [2024-07-15 10:05:58.930192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.214 [2024-07-15 10:05:58.930215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.214 [2024-07-15 10:05:58.930230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.214 [2024-07-15 10:05:58.933792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.214 [2024-07-15 10:05:58.943089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.214 [2024-07-15 10:05:58.943489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.215 [2024-07-15 10:05:58.943520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.215 [2024-07-15 10:05:58.943538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.215 [2024-07-15 10:05:58.943775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.215 [2024-07-15 10:05:58.944032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.215 [2024-07-15 10:05:58.944057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.215 [2024-07-15 10:05:58.944071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.215 [2024-07-15 10:05:58.947634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.215 [2024-07-15 10:05:58.957121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.215 [2024-07-15 10:05:58.957545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.215 [2024-07-15 10:05:58.957576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.215 [2024-07-15 10:05:58.957593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.215 [2024-07-15 10:05:58.957831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.215 [2024-07-15 10:05:58.958082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.215 [2024-07-15 10:05:58.958106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.215 [2024-07-15 10:05:58.958121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.215 [2024-07-15 10:05:58.961683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.215 [2024-07-15 10:05:58.970961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.215 [2024-07-15 10:05:58.971388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.215 [2024-07-15 10:05:58.971419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.215 [2024-07-15 10:05:58.971437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.215 [2024-07-15 10:05:58.971674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.215 [2024-07-15 10:05:58.971928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.215 [2024-07-15 10:05:58.971952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.215 [2024-07-15 10:05:58.971966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.215 [2024-07-15 10:05:58.975530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.215 [2024-07-15 10:05:58.984800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.215 [2024-07-15 10:05:58.985188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.215 [2024-07-15 10:05:58.985220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.215 [2024-07-15 10:05:58.985237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.215 [2024-07-15 10:05:58.985475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.215 [2024-07-15 10:05:58.985716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.215 [2024-07-15 10:05:58.985739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.215 [2024-07-15 10:05:58.985754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.215 [2024-07-15 10:05:58.989322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.478 [2024-07-15 10:05:58.998823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.478 [2024-07-15 10:05:58.999257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.478 [2024-07-15 10:05:58.999289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.478 [2024-07-15 10:05:58.999307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.478 [2024-07-15 10:05:58.999545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.478 [2024-07-15 10:05:58.999787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.478 [2024-07-15 10:05:58.999810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.478 [2024-07-15 10:05:58.999824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.478 [2024-07-15 10:05:59.003404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.478 [2024-07-15 10:05:59.012669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.478 [2024-07-15 10:05:59.013111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.478 [2024-07-15 10:05:59.013142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.478 [2024-07-15 10:05:59.013159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.478 [2024-07-15 10:05:59.013396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.478 [2024-07-15 10:05:59.013637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.478 [2024-07-15 10:05:59.013660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.478 [2024-07-15 10:05:59.013676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.478 [2024-07-15 10:05:59.017249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.478 [2024-07-15 10:05:59.026512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.478 [2024-07-15 10:05:59.026933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.478 [2024-07-15 10:05:59.026965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.478 [2024-07-15 10:05:59.026983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.478 [2024-07-15 10:05:59.027219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.478 [2024-07-15 10:05:59.027460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.478 [2024-07-15 10:05:59.027484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.478 [2024-07-15 10:05:59.027498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.478 [2024-07-15 10:05:59.031070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.478 [2024-07-15 10:05:59.040357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.478 [2024-07-15 10:05:59.040904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.478 [2024-07-15 10:05:59.040936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.478 [2024-07-15 10:05:59.040960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.478 [2024-07-15 10:05:59.041199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.478 [2024-07-15 10:05:59.041441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.478 [2024-07-15 10:05:59.041465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.478 [2024-07-15 10:05:59.041479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.478 [2024-07-15 10:05:59.045077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.478 [2024-07-15 10:05:59.054363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.478 [2024-07-15 10:05:59.054790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.478 [2024-07-15 10:05:59.054821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.478 [2024-07-15 10:05:59.054839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.478 [2024-07-15 10:05:59.055085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.478 [2024-07-15 10:05:59.055328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.478 [2024-07-15 10:05:59.055351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.478 [2024-07-15 10:05:59.055366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.478 [2024-07-15 10:05:59.058966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.478 [2024-07-15 10:05:59.068241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.478 [2024-07-15 10:05:59.068797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.478 [2024-07-15 10:05:59.068853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.478 [2024-07-15 10:05:59.068871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.478 [2024-07-15 10:05:59.069118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.478 [2024-07-15 10:05:59.069360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.478 [2024-07-15 10:05:59.069383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.478 [2024-07-15 10:05:59.069398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.478 [2024-07-15 10:05:59.072971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.478 [2024-07-15 10:05:59.082249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.478 [2024-07-15 10:05:59.082805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.478 [2024-07-15 10:05:59.082856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.478 [2024-07-15 10:05:59.082873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.479 [2024-07-15 10:05:59.083122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.479 [2024-07-15 10:05:59.083363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.479 [2024-07-15 10:05:59.083392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.479 [2024-07-15 10:05:59.083407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.479 [2024-07-15 10:05:59.086985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.479 [2024-07-15 10:05:59.096262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.479 [2024-07-15 10:05:59.096699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.479 [2024-07-15 10:05:59.096730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.479 [2024-07-15 10:05:59.096748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.479 [2024-07-15 10:05:59.097007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.479 [2024-07-15 10:05:59.097249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.479 [2024-07-15 10:05:59.097272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.479 [2024-07-15 10:05:59.097287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.479 [2024-07-15 10:05:59.100851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.479 [2024-07-15 10:05:59.110128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.479 [2024-07-15 10:05:59.110504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.479 [2024-07-15 10:05:59.110535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.479 [2024-07-15 10:05:59.110552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.479 [2024-07-15 10:05:59.110790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.479 [2024-07-15 10:05:59.111041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.479 [2024-07-15 10:05:59.111065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.479 [2024-07-15 10:05:59.111080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.479 [2024-07-15 10:05:59.114644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.479 [2024-07-15 10:05:59.124128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.479 [2024-07-15 10:05:59.124528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.479 [2024-07-15 10:05:59.124559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.479 [2024-07-15 10:05:59.124576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.479 [2024-07-15 10:05:59.124814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.479 [2024-07-15 10:05:59.125066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.479 [2024-07-15 10:05:59.125090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.479 [2024-07-15 10:05:59.125105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.479 [2024-07-15 10:05:59.128666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.479 [2024-07-15 10:05:59.138149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:42.479 [2024-07-15 10:05:59.138582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.479 [2024-07-15 10:05:59.138613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:42.479 [2024-07-15 10:05:59.138630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:42.479 [2024-07-15 10:05:59.138867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:42.479 [2024-07-15 10:05:59.139120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:42.479 [2024-07-15 10:05:59.139143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:42.479 [2024-07-15 10:05:59.139158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:42.479 [2024-07-15 10:05:59.142719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:42.479 [2024-07-15 10:05:59.151986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.479 [2024-07-15 10:05:59.152410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.479 [2024-07-15 10:05:59.152440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.479 [2024-07-15 10:05:59.152458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.479 [2024-07-15 10:05:59.152696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.479 [2024-07-15 10:05:59.152951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.479 [2024-07-15 10:05:59.152975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.479 [2024-07-15 10:05:59.152989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.479 [2024-07-15 10:05:59.156553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.479 [2024-07-15 10:05:59.165823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.479 [2024-07-15 10:05:59.166273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.479 [2024-07-15 10:05:59.166304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.479 [2024-07-15 10:05:59.166321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.479 [2024-07-15 10:05:59.166559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.479 [2024-07-15 10:05:59.166800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.479 [2024-07-15 10:05:59.166823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.479 [2024-07-15 10:05:59.166838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.479 [2024-07-15 10:05:59.170407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.479 [2024-07-15 10:05:59.179670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.479 [2024-07-15 10:05:59.180105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.479 [2024-07-15 10:05:59.180136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.479 [2024-07-15 10:05:59.180153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.479 [2024-07-15 10:05:59.180396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.479 [2024-07-15 10:05:59.180638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.479 [2024-07-15 10:05:59.180661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.479 [2024-07-15 10:05:59.180676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.479 [2024-07-15 10:05:59.184248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.479 [2024-07-15 10:05:59.193540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.479 [2024-07-15 10:05:59.193939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.479 [2024-07-15 10:05:59.193971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.479 [2024-07-15 10:05:59.193988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.479 [2024-07-15 10:05:59.194226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.479 [2024-07-15 10:05:59.194467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.479 [2024-07-15 10:05:59.194490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.479 [2024-07-15 10:05:59.194504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.479 [2024-07-15 10:05:59.198082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.479 [2024-07-15 10:05:59.207572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.479 [2024-07-15 10:05:59.207964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.479 [2024-07-15 10:05:59.207996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.479 [2024-07-15 10:05:59.208013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.479 [2024-07-15 10:05:59.208250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.479 [2024-07-15 10:05:59.208491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.479 [2024-07-15 10:05:59.208515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.479 [2024-07-15 10:05:59.208529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.479 [2024-07-15 10:05:59.212109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.479 [2024-07-15 10:05:59.221411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.479 [2024-07-15 10:05:59.221837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.479 [2024-07-15 10:05:59.221868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.479 [2024-07-15 10:05:59.221898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.479 [2024-07-15 10:05:59.222137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.479 [2024-07-15 10:05:59.222378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.479 [2024-07-15 10:05:59.222401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.479 [2024-07-15 10:05:59.222422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.479 [2024-07-15 10:05:59.225999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.479 [2024-07-15 10:05:59.235309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.479 [2024-07-15 10:05:59.235718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.479 [2024-07-15 10:05:59.235749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.479 [2024-07-15 10:05:59.235767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.479 [2024-07-15 10:05:59.236017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.479 [2024-07-15 10:05:59.236260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.479 [2024-07-15 10:05:59.236283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.479 [2024-07-15 10:05:59.236298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.479 [2024-07-15 10:05:59.239883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.479 [2024-07-15 10:05:59.249170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.479 [2024-07-15 10:05:59.249681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.479 [2024-07-15 10:05:59.249732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.479 [2024-07-15 10:05:59.249750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.479 [2024-07-15 10:05:59.250010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.479 [2024-07-15 10:05:59.250254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.479 [2024-07-15 10:05:59.250277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.479 [2024-07-15 10:05:59.250292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.479 [2024-07-15 10:05:59.253856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.739 [2024-07-15 10:05:59.263139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.739 [2024-07-15 10:05:59.263574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.739 [2024-07-15 10:05:59.263605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.739 [2024-07-15 10:05:59.263622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.739 [2024-07-15 10:05:59.263860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.739 [2024-07-15 10:05:59.264112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.739 [2024-07-15 10:05:59.264135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.739 [2024-07-15 10:05:59.264151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.739 [2024-07-15 10:05:59.267713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.739 [2024-07-15 10:05:59.276998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.739 [2024-07-15 10:05:59.277425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.739 [2024-07-15 10:05:59.277461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.739 [2024-07-15 10:05:59.277479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.739 [2024-07-15 10:05:59.277717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.739 [2024-07-15 10:05:59.277969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.739 [2024-07-15 10:05:59.277993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.739 [2024-07-15 10:05:59.278007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.739 [2024-07-15 10:05:59.281569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.739 [2024-07-15 10:05:59.290833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.739 [2024-07-15 10:05:59.291247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.739 [2024-07-15 10:05:59.291278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.739 [2024-07-15 10:05:59.291296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.739 [2024-07-15 10:05:59.291533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.739 [2024-07-15 10:05:59.291774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.739 [2024-07-15 10:05:59.291797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.739 [2024-07-15 10:05:59.291812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.739 [2024-07-15 10:05:59.295381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.739 [2024-07-15 10:05:59.304655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.739 [2024-07-15 10:05:59.305089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.739 [2024-07-15 10:05:59.305121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.739 [2024-07-15 10:05:59.305138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.739 [2024-07-15 10:05:59.305375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.739 [2024-07-15 10:05:59.305617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.739 [2024-07-15 10:05:59.305649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.739 [2024-07-15 10:05:59.305664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.739 [2024-07-15 10:05:59.309234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.739 [2024-07-15 10:05:59.318503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.739 [2024-07-15 10:05:59.318928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.739 [2024-07-15 10:05:59.318959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.739 [2024-07-15 10:05:59.318977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.739 [2024-07-15 10:05:59.319214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.739 [2024-07-15 10:05:59.319465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.739 [2024-07-15 10:05:59.319489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.739 [2024-07-15 10:05:59.319504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.739 [2024-07-15 10:05:59.323076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.739 [2024-07-15 10:05:59.332355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.739 [2024-07-15 10:05:59.332767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.739 [2024-07-15 10:05:59.332798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.739 [2024-07-15 10:05:59.332828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.739 [2024-07-15 10:05:59.333085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.739 [2024-07-15 10:05:59.333329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.739 [2024-07-15 10:05:59.333352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.739 [2024-07-15 10:05:59.333367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.739 [2024-07-15 10:05:59.336935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.739 [2024-07-15 10:05:59.346208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.739 [2024-07-15 10:05:59.346646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.739 [2024-07-15 10:05:59.346677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.739 [2024-07-15 10:05:59.346694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.739 [2024-07-15 10:05:59.346942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.739 [2024-07-15 10:05:59.347184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.739 [2024-07-15 10:05:59.347207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.739 [2024-07-15 10:05:59.347222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.739 [2024-07-15 10:05:59.350784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.739 [2024-07-15 10:05:59.360048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.739 [2024-07-15 10:05:59.360452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.739 [2024-07-15 10:05:59.360483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.739 [2024-07-15 10:05:59.360500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.739 [2024-07-15 10:05:59.360737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.739 [2024-07-15 10:05:59.360988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.739 [2024-07-15 10:05:59.361012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.739 [2024-07-15 10:05:59.361027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.739 [2024-07-15 10:05:59.364591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.739 [2024-07-15 10:05:59.374076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.739 [2024-07-15 10:05:59.374510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.739 [2024-07-15 10:05:59.374541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.739 [2024-07-15 10:05:59.374558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.739 [2024-07-15 10:05:59.374795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.739 [2024-07-15 10:05:59.375046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.739 [2024-07-15 10:05:59.375070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.739 [2024-07-15 10:05:59.375085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.739 [2024-07-15 10:05:59.378643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.739 [2024-07-15 10:05:59.387928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.739 [2024-07-15 10:05:59.388327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.739 [2024-07-15 10:05:59.388358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.739 [2024-07-15 10:05:59.388375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.739 [2024-07-15 10:05:59.388612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.739 [2024-07-15 10:05:59.388853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.739 [2024-07-15 10:05:59.388885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.740 [2024-07-15 10:05:59.388903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.740 [2024-07-15 10:05:59.392472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.740 [2024-07-15 10:05:59.401962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.740 [2024-07-15 10:05:59.402408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.740 [2024-07-15 10:05:59.402438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.740 [2024-07-15 10:05:59.402456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.740 [2024-07-15 10:05:59.402693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.740 [2024-07-15 10:05:59.402946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.740 [2024-07-15 10:05:59.402970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.740 [2024-07-15 10:05:59.402986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.740 [2024-07-15 10:05:59.406546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.740 [2024-07-15 10:05:59.415803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.740 [2024-07-15 10:05:59.416242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.740 [2024-07-15 10:05:59.416273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.740 [2024-07-15 10:05:59.416298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.740 [2024-07-15 10:05:59.416537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.740 [2024-07-15 10:05:59.416778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.740 [2024-07-15 10:05:59.416801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.740 [2024-07-15 10:05:59.416816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.740 [2024-07-15 10:05:59.420385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.740 [2024-07-15 10:05:59.429648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.740 [2024-07-15 10:05:59.430041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.740 [2024-07-15 10:05:59.430072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.740 [2024-07-15 10:05:59.430089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.740 [2024-07-15 10:05:59.430326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.740 [2024-07-15 10:05:59.430568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.740 [2024-07-15 10:05:59.430591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.740 [2024-07-15 10:05:59.430606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.740 [2024-07-15 10:05:59.434178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.740 [2024-07-15 10:05:59.443664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.740 [2024-07-15 10:05:59.444100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.740 [2024-07-15 10:05:59.444131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.740 [2024-07-15 10:05:59.444148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.740 [2024-07-15 10:05:59.444385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.740 [2024-07-15 10:05:59.444625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.740 [2024-07-15 10:05:59.444648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.740 [2024-07-15 10:05:59.444663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.740 [2024-07-15 10:05:59.448237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.740 [2024-07-15 10:05:59.457500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.740 [2024-07-15 10:05:59.457965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.740 [2024-07-15 10:05:59.458060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.740 [2024-07-15 10:05:59.458078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.740 [2024-07-15 10:05:59.458316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.740 [2024-07-15 10:05:59.458557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.740 [2024-07-15 10:05:59.458585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.740 [2024-07-15 10:05:59.458601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.740 [2024-07-15 10:05:59.462176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.740 [2024-07-15 10:05:59.471449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.740 [2024-07-15 10:05:59.471863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.740 [2024-07-15 10:05:59.471902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.740 [2024-07-15 10:05:59.471920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.740 [2024-07-15 10:05:59.472158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.740 [2024-07-15 10:05:59.472400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.740 [2024-07-15 10:05:59.472423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.740 [2024-07-15 10:05:59.472438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.740 [2024-07-15 10:05:59.476012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.740 [2024-07-15 10:05:59.485285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.740 [2024-07-15 10:05:59.485708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.740 [2024-07-15 10:05:59.485739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.740 [2024-07-15 10:05:59.485756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.740 [2024-07-15 10:05:59.486004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.740 [2024-07-15 10:05:59.486246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.740 [2024-07-15 10:05:59.486269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.740 [2024-07-15 10:05:59.486284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.740 [2024-07-15 10:05:59.489844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.740 [2024-07-15 10:05:59.499113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.740 [2024-07-15 10:05:59.499517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.740 [2024-07-15 10:05:59.499547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.740 [2024-07-15 10:05:59.499565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.740 [2024-07-15 10:05:59.499803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.740 [2024-07-15 10:05:59.500053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.740 [2024-07-15 10:05:59.500078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.740 [2024-07-15 10:05:59.500093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.740 [2024-07-15 10:05:59.503651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.740 [2024-07-15 10:05:59.513135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.740 [2024-07-15 10:05:59.513580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.740 [2024-07-15 10:05:59.513610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.740 [2024-07-15 10:05:59.513628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.740 [2024-07-15 10:05:59.513865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.740 [2024-07-15 10:05:59.514116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.740 [2024-07-15 10:05:59.514139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.740 [2024-07-15 10:05:59.514155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.740 [2024-07-15 10:05:59.517715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.999 [2024-07-15 10:05:59.526988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.999 [2024-07-15 10:05:59.527419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.999 [2024-07-15 10:05:59.527449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.999 [2024-07-15 10:05:59.527467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.999 [2024-07-15 10:05:59.527705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.999 [2024-07-15 10:05:59.527957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.999 [2024-07-15 10:05:59.527981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.999 [2024-07-15 10:05:59.527996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.999 [2024-07-15 10:05:59.531556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.999 [2024-07-15 10:05:59.540818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.999 [2024-07-15 10:05:59.541231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.999 [2024-07-15 10:05:59.541262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.999 [2024-07-15 10:05:59.541280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.999 [2024-07-15 10:05:59.541516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.999 [2024-07-15 10:05:59.541758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.999 [2024-07-15 10:05:59.541781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.999 [2024-07-15 10:05:59.541795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.999 [2024-07-15 10:05:59.545365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.999 [2024-07-15 10:05:59.554833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.999 [2024-07-15 10:05:59.555262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.999 [2024-07-15 10:05:59.555294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.999 [2024-07-15 10:05:59.555316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.999 [2024-07-15 10:05:59.555554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.999 [2024-07-15 10:05:59.555795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.999 [2024-07-15 10:05:59.555819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.999 [2024-07-15 10:05:59.555833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.999 [2024-07-15 10:05:59.559400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.999 [2024-07-15 10:05:59.568661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.999 [2024-07-15 10:05:59.569101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.999 [2024-07-15 10:05:59.569133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.999 [2024-07-15 10:05:59.569151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.999 [2024-07-15 10:05:59.569388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.999 [2024-07-15 10:05:59.569629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.999 [2024-07-15 10:05:59.569652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.999 [2024-07-15 10:05:59.569667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.999 [2024-07-15 10:05:59.573233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.999 [2024-07-15 10:05:59.582490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.999 [2024-07-15 10:05:59.582914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.999 [2024-07-15 10:05:59.582946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.999 [2024-07-15 10:05:59.582963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.999 [2024-07-15 10:05:59.583201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.999 [2024-07-15 10:05:59.583441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.999 [2024-07-15 10:05:59.583464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.999 [2024-07-15 10:05:59.583479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.999 [2024-07-15 10:05:59.587049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.999 [2024-07-15 10:05:59.596514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.999 [2024-07-15 10:05:59.596918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.999 [2024-07-15 10:05:59.596950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.999 [2024-07-15 10:05:59.596968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.999 [2024-07-15 10:05:59.597206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.999 [2024-07-15 10:05:59.597447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.999 [2024-07-15 10:05:59.597470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.999 [2024-07-15 10:05:59.597491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.999 [2024-07-15 10:05:59.601060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.999 [2024-07-15 10:05:59.610526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.999 [2024-07-15 10:05:59.610916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.999 [2024-07-15 10:05:59.610948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.999 [2024-07-15 10:05:59.610965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:42.999 [2024-07-15 10:05:59.611203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:42.999 [2024-07-15 10:05:59.611444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:42.999 [2024-07-15 10:05:59.611467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:42.999 [2024-07-15 10:05:59.611481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:42.999 [2024-07-15 10:05:59.615053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:42.999 [2024-07-15 10:05:59.624516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:42.999 [2024-07-15 10:05:59.624948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:42.999 [2024-07-15 10:05:59.624978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:42.999 [2024-07-15 10:05:59.624996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:43.000 [2024-07-15 10:05:59.625233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:43.000 [2024-07-15 10:05:59.625474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.000 [2024-07-15 10:05:59.625498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.000 [2024-07-15 10:05:59.625513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.000 [2024-07-15 10:05:59.629082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.000 [2024-07-15 10:05:59.638378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.000 [2024-07-15 10:05:59.638779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.000 [2024-07-15 10:05:59.638810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:43.000 [2024-07-15 10:05:59.638827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:43.000 [2024-07-15 10:05:59.639075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:43.000 [2024-07-15 10:05:59.639317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.000 [2024-07-15 10:05:59.639340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.000 [2024-07-15 10:05:59.639355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.000 [2024-07-15 10:05:59.642920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.000 [2024-07-15 10:05:59.652386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.000 [2024-07-15 10:05:59.652802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.000 [2024-07-15 10:05:59.652833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:43.000 [2024-07-15 10:05:59.652851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:43.000 [2024-07-15 10:05:59.653098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:43.000 [2024-07-15 10:05:59.653340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.000 [2024-07-15 10:05:59.653363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.000 [2024-07-15 10:05:59.653378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.000 [2024-07-15 10:05:59.656944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.000 [2024-07-15 10:05:59.666411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.000 [2024-07-15 10:05:59.666839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.000 [2024-07-15 10:05:59.666870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:43.000 [2024-07-15 10:05:59.666898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:43.000 [2024-07-15 10:05:59.667136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:43.000 [2024-07-15 10:05:59.667378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.000 [2024-07-15 10:05:59.667401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.000 [2024-07-15 10:05:59.667416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.000 [2024-07-15 10:05:59.670979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.000 [2024-07-15 10:05:59.680237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.000 [2024-07-15 10:05:59.680660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.000 [2024-07-15 10:05:59.680691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.000 [2024-07-15 10:05:59.680708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.000 [2024-07-15 10:05:59.680956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.000 [2024-07-15 10:05:59.681199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.000 [2024-07-15 10:05:59.681222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.000 [2024-07-15 10:05:59.681237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.000 [2024-07-15 10:05:59.684795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.000 [2024-07-15 10:05:59.694266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2052037 Killed "${NVMF_APP[@]}" "$@" 00:32:43.000 [2024-07-15 10:05:59.694693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.000 [2024-07-15 10:05:59.694723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.000 [2024-07-15 10:05:59.694746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.000 10:05:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:32:43.000 [2024-07-15 10:05:59.694995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.000 10:05:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:43.000 [2024-07-15 10:05:59.695237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.000 [2024-07-15 10:05:59.695260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.000 [2024-07-15 10:05:59.695276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.000 10:05:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:43.000 10:05:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:43.000 10:05:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:43.000 [2024-07-15 10:05:59.698834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.000 10:05:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2052994
00:32:43.000 10:05:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:32:43.000 10:05:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2052994
00:32:43.000 10:05:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2052994 ']'
00:32:43.000 10:05:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:43.000 10:05:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:32:43.000 10:05:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:43.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:43.000 10:05:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:32:43.000 10:05:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:43.000 [2024-07-15 10:05:59.708109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.000 [2024-07-15 10:05:59.708534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.000 [2024-07-15 10:05:59.708565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:43.000 [2024-07-15 10:05:59.708583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:43.000 [2024-07-15 10:05:59.708820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:43.000 [2024-07-15 10:05:59.709071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.000 [2024-07-15 10:05:59.709095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.000 [2024-07-15 10:05:59.709109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.000 [2024-07-15 10:05:59.712671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
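The restart command above runs nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with -i 0 (shared-memory ID), -e 0xFFFF (trace-group flags), and -m 0xE, which is a core mask rather than a core count; waitforlisten then polls up to max_retries=100 times for the /var/tmp/spdk.sock RPC socket. A small illustrative decode of the mask (not from the scripts):

  # -m 0xE is binary 1110: bit n set means CPU core n is used.
  mask=0xE
  for core in {0..3}; do
      (( (mask >> core) & 1 )) && echo "core $core enabled"   # cores 1, 2, 3
  done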
00:32:43.000 [2024-07-15 10:05:59.721939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.000 [2024-07-15 10:05:59.722368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.000 [2024-07-15 10:05:59.722399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:43.000 [2024-07-15 10:05:59.722416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:43.000 [2024-07-15 10:05:59.722653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:43.000 [2024-07-15 10:05:59.722910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.000 [2024-07-15 10:05:59.722933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.000 [2024-07-15 10:05:59.722948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.000 [2024-07-15 10:05:59.726509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.000 [2024-07-15 10:05:59.735776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:43.000 [2024-07-15 10:05:59.736210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:43.000 [2024-07-15 10:05:59.736241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420
00:32:43.000 [2024-07-15 10:05:59.736259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set
00:32:43.000 [2024-07-15 10:05:59.736496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor
00:32:43.000 [2024-07-15 10:05:59.736737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:43.000 [2024-07-15 10:05:59.736761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:43.000 [2024-07-15 10:05:59.736776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.000 [2024-07-15 10:05:59.740347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:43.000 [2024-07-15 10:05:59.748209] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:32:43.000 [2024-07-15 10:05:59.748278] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.000 [2024-07-15 10:05:59.749615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.000 [2024-07-15 10:05:59.750046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.000 [2024-07-15 10:05:59.750077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.000 [2024-07-15 10:05:59.750094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.000 [2024-07-15 10:05:59.750333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.000 [2024-07-15 10:05:59.750574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.001 [2024-07-15 10:05:59.750597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.001 [2024-07-15 10:05:59.750612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.001 [2024-07-15 10:05:59.754180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.001 [2024-07-15 10:05:59.763479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.001 [2024-07-15 10:05:59.763915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.001 [2024-07-15 10:05:59.763946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.001 [2024-07-15 10:05:59.763964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.001 [2024-07-15 10:05:59.764201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.001 [2024-07-15 10:05:59.764448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.001 [2024-07-15 10:05:59.764472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.001 [2024-07-15 10:05:59.764487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.001 [2024-07-15 10:05:59.768230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
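Editor's note: every attempt in these blocks dies with errno = 111, which on Linux is ECONNREFUSED: the target's listener on 10.0.0.2:4420 has not been created yet (that only happens once the RPC setup further down runs), so each reconnect poll fails immediately and bdev_nvme schedules the next reset. A quick check of the mapping:

  # errno 111 is ECONNREFUSED in the kernel's generic errno table:
  grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
  # -> #define ECONNREFUSED 111 /* Connection refused */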
00:32:43.001 [2024-07-15 10:05:59.777497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.001 [2024-07-15 10:05:59.777930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.001 [2024-07-15 10:05:59.777961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.001 [2024-07-15 10:05:59.777979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.001 [2024-07-15 10:05:59.778217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.001 [2024-07-15 10:05:59.778458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.001 [2024-07-15 10:05:59.778481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.001 [2024-07-15 10:05:59.778496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.001 [2024-07-15 10:05:59.782068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.259 EAL: No free 2048 kB hugepages reported on node 1 00:32:43.259 [2024-07-15 10:05:59.791335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.259 [2024-07-15 10:05:59.791734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.259 [2024-07-15 10:05:59.791765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.259 [2024-07-15 10:05:59.791783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.259 [2024-07-15 10:05:59.792031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.259 [2024-07-15 10:05:59.792273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.259 [2024-07-15 10:05:59.792296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.259 [2024-07-15 10:05:59.792311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.259 [2024-07-15 10:05:59.794985] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:43.259 [2024-07-15 10:05:59.795888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
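Editor's note: the "EAL: No free 2048 kB hugepages reported on node 1" line is informational, not fatal: NUMA node 1 simply has no 2 MiB pages reserved, so DPDK serves the allocation from node 0. The per-node pools can be inspected or topped up through sysfs; the 1024-page figure below is only an illustrative value:

  # Free/total 2 MiB hugepages per NUMA node:
  grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/*_hugepages
  # Reserve 1024 pages (2 GiB) on node 1, should the target need them there:
  echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages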
00:32:43.259 [2024-07-15 10:05:59.805363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.259 [2024-07-15 10:05:59.805797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.259 [2024-07-15 10:05:59.805828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.259 [2024-07-15 10:05:59.805846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.259 [2024-07-15 10:05:59.806091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.259 [2024-07-15 10:05:59.806333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.259 [2024-07-15 10:05:59.806357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.259 [2024-07-15 10:05:59.806378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.259 [2024-07-15 10:05:59.809944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.259 [2024-07-15 10:05:59.819204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.259 [2024-07-15 10:05:59.819632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.259 [2024-07-15 10:05:59.819662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.259 [2024-07-15 10:05:59.819680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.259 [2024-07-15 10:05:59.819929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.259 [2024-07-15 10:05:59.820171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.259 [2024-07-15 10:05:59.820195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.259 [2024-07-15 10:05:59.820210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.259 [2024-07-15 10:05:59.823767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.259 [2024-07-15 10:05:59.826169] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:43.259 [2024-07-15 10:05:59.833090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.259 [2024-07-15 10:05:59.833580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.259 [2024-07-15 10:05:59.833615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.259 [2024-07-15 10:05:59.833635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.259 [2024-07-15 10:05:59.833887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.259 [2024-07-15 10:05:59.834132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.259 [2024-07-15 10:05:59.834156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.259 [2024-07-15 10:05:59.834172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.260 [2024-07-15 10:05:59.837771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.260 [2024-07-15 10:05:59.847076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.260 [2024-07-15 10:05:59.847631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.260 [2024-07-15 10:05:59.847670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.260 [2024-07-15 10:05:59.847692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.260 [2024-07-15 10:05:59.847950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.260 [2024-07-15 10:05:59.848196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.260 [2024-07-15 10:05:59.848220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.260 [2024-07-15 10:05:59.848238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.260 [2024-07-15 10:05:59.851800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
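Editor's note: "Total cores available: 3" is exactly what the -m 0xE mask on the nvmf_tgt command line requests: 0xE is binary 1110, selecting cores 1, 2 and 3 (the three reactor lines further down confirm one reactor per selected core). Decoding such a mask is a shell one-liner:

  # Expand a core mask into its core list (0xE -> cores 1 2 3):
  mask=0xE
  for ((c = 0; c < 64; c++)); do
      (( (mask >> c) & 1 )) && printf 'core %d\n' "$c"
  done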
00:32:43.260 [2024-07-15 10:05:59.861076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.260 [2024-07-15 10:05:59.861499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.260 [2024-07-15 10:05:59.861531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.260 [2024-07-15 10:05:59.861549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.260 [2024-07-15 10:05:59.861788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.260 [2024-07-15 10:05:59.862041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.260 [2024-07-15 10:05:59.862066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.260 [2024-07-15 10:05:59.862081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.260 [2024-07-15 10:05:59.865642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.260 [2024-07-15 10:05:59.875133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.260 [2024-07-15 10:05:59.875589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.260 [2024-07-15 10:05:59.875621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.260 [2024-07-15 10:05:59.875639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.260 [2024-07-15 10:05:59.875890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.260 [2024-07-15 10:05:59.876134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.260 [2024-07-15 10:05:59.876158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.260 [2024-07-15 10:05:59.876174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.260 [2024-07-15 10:05:59.879737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.260 [2024-07-15 10:05:59.889033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.260 [2024-07-15 10:05:59.889605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.260 [2024-07-15 10:05:59.889647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.260 [2024-07-15 10:05:59.889667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.260 [2024-07-15 10:05:59.889925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.260 [2024-07-15 10:05:59.890174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.260 [2024-07-15 10:05:59.890198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.260 [2024-07-15 10:05:59.890216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.260 [2024-07-15 10:05:59.893777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.260 [2024-07-15 10:05:59.903065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.260 [2024-07-15 10:05:59.903485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.260 [2024-07-15 10:05:59.903517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.260 [2024-07-15 10:05:59.903535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.260 [2024-07-15 10:05:59.903784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.260 [2024-07-15 10:05:59.904038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.260 [2024-07-15 10:05:59.904063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.260 [2024-07-15 10:05:59.904077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.260 [2024-07-15 10:05:59.907638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.260 [2024-07-15 10:05:59.916909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.260 [2024-07-15 10:05:59.917337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.260 [2024-07-15 10:05:59.917369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.260 [2024-07-15 10:05:59.917388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.260 [2024-07-15 10:05:59.917627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.260 [2024-07-15 10:05:59.917869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.260 [2024-07-15 10:05:59.917902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.260 [2024-07-15 10:05:59.917919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.260 [2024-07-15 10:05:59.921127] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:43.260 [2024-07-15 10:05:59.921172] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:43.260 [2024-07-15 10:05:59.921188] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:43.260 [2024-07-15 10:05:59.921202] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:43.260 [2024-07-15 10:05:59.921214] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:43.260 [2024-07-15 10:05:59.921301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:43.260 [2024-07-15 10:05:59.921356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:43.260 [2024-07-15 10:05:59.921360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:43.260 [2024-07-15 10:05:59.921480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.260 [2024-07-15 10:05:59.930764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.260 [2024-07-15 10:05:59.931371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.260 [2024-07-15 10:05:59.931411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.260 [2024-07-15 10:05:59.931432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.260 [2024-07-15 10:05:59.931677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.260 [2024-07-15 10:05:59.931934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.260 [2024-07-15 10:05:59.931959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.260 [2024-07-15 10:05:59.931977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:43.260 [2024-07-15 10:05:59.935557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.260 [2024-07-15 10:05:59.944649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.260 [2024-07-15 10:05:59.945275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.260 [2024-07-15 10:05:59.945316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.260 [2024-07-15 10:05:59.945336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.260 [2024-07-15 10:05:59.945583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.260 [2024-07-15 10:05:59.945829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.261 [2024-07-15 10:05:59.945853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.261 [2024-07-15 10:05:59.945870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.261 [2024-07-15 10:05:59.949450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.261 [2024-07-15 10:05:59.958735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.261 [2024-07-15 10:05:59.959360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.261 [2024-07-15 10:05:59.959403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.261 [2024-07-15 10:05:59.959425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.261 [2024-07-15 10:05:59.959673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.261 [2024-07-15 10:05:59.959931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.261 [2024-07-15 10:05:59.959956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.261 [2024-07-15 10:05:59.959973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.261 [2024-07-15 10:05:59.963536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
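Editor's note: the app_setup_trace notices a little above spell out how to harvest the tracepoints enabled by -e 0xFFFF, either live or from the shm file left behind. Roughly, following those notices (the /tmp destination is just an example):

  # Live snapshot while the target runs (app name nvmf, shm id 0, per the notice):
  spdk_trace -s nvmf -i 0
  # Or keep /dev/shm/nvmf_trace.0 for offline decoding after the run:
  cp /dev/shm/nvmf_trace.0 /tmp/ && spdk_trace -f /tmp/nvmf_trace.0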
00:32:43.261 [2024-07-15 10:05:59.972617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.261 [2024-07-15 10:05:59.973186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.261 [2024-07-15 10:05:59.973227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.261 [2024-07-15 10:05:59.973247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.261 [2024-07-15 10:05:59.973495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.261 [2024-07-15 10:05:59.973741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.261 [2024-07-15 10:05:59.973765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.261 [2024-07-15 10:05:59.973782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.261 [2024-07-15 10:05:59.977351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.261 [2024-07-15 10:05:59.986623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.261 [2024-07-15 10:05:59.987184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.261 [2024-07-15 10:05:59.987223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.261 [2024-07-15 10:05:59.987243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.261 [2024-07-15 10:05:59.987504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.261 [2024-07-15 10:05:59.987749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.261 [2024-07-15 10:05:59.987774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.261 [2024-07-15 10:05:59.987792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.261 [2024-07-15 10:05:59.991365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.261 [2024-07-15 10:06:00.000654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.261 [2024-07-15 10:06:00.001244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.261 [2024-07-15 10:06:00.001296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.261 [2024-07-15 10:06:00.001317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.261 [2024-07-15 10:06:00.001565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.261 [2024-07-15 10:06:00.001810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.261 [2024-07-15 10:06:00.001834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.261 [2024-07-15 10:06:00.001851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.261 [2024-07-15 10:06:00.005425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.261 [2024-07-15 10:06:00.014747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.261 [2024-07-15 10:06:00.015274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.261 [2024-07-15 10:06:00.015319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.261 [2024-07-15 10:06:00.015352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.261 [2024-07-15 10:06:00.015667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.261 [2024-07-15 10:06:00.015999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.261 [2024-07-15 10:06:00.016031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.261 [2024-07-15 10:06:00.016054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.261 [2024-07-15 10:06:00.019871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.261 [2024-07-15 10:06:00.028400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.261 [2024-07-15 10:06:00.028856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.261 [2024-07-15 10:06:00.028897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.261 [2024-07-15 10:06:00.028928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.261 [2024-07-15 10:06:00.029183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.261 [2024-07-15 10:06:00.029423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.261 [2024-07-15 10:06:00.029447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.261 [2024-07-15 10:06:00.029481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.261 [2024-07-15 10:06:00.032729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.261 10:06:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:43.261 10:06:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:32:43.261 10:06:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:43.261 10:06:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:43.261 10:06:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:43.261 [2024-07-15 10:06:00.041984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.261 [2024-07-15 10:06:00.042440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.519 [2024-07-15 10:06:00.042472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.519 [2024-07-15 10:06:00.042500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.519 [2024-07-15 10:06:00.042791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.519 [2024-07-15 10:06:00.043042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.519 [2024-07-15 10:06:00.043067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.519 [2024-07-15 10:06:00.043090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.520 [2024-07-15 10:06:00.046465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.520 [2024-07-15 10:06:00.055590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller [2024-07-15 10:06:00.056019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-07-15 10:06:00.056052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 [2024-07-15 10:06:00.056080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set [2024-07-15 10:06:00.056333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor [2024-07-15 10:06:00.056572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 10:06:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:43.520 [2024-07-15 10:06:00.056603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed [2024-07-15 10:06:00.056626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.520 10:06:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 10:06:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 10:06:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:43.520 [2024-07-15 10:06:00.059837] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** [2024-07-15 10:06:00.059929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.520 10:06:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 10:06:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 10:06:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 10:06:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:43.520 [2024-07-15 10:06:00.069327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller [2024-07-15 10:06:00.069764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-07-15 10:06:00.069796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 [2024-07-15 10:06:00.069823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set [2024-07-15 10:06:00.070090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor [2024-07-15 10:06:00.070353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state [2024-07-15 10:06:00.070375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed [2024-07-15 10:06:00.070397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:43.520 [2024-07-15 10:06:00.073690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.520 [2024-07-15 10:06:00.082923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.520 [2024-07-15 10:06:00.083409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.520 [2024-07-15 10:06:00.083441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.520 [2024-07-15 10:06:00.083468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.520 [2024-07-15 10:06:00.083739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.520 [2024-07-15 10:06:00.083999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.520 [2024-07-15 10:06:00.084023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.520 [2024-07-15 10:06:00.084046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.520 [2024-07-15 10:06:00.087344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.520 [2024-07-15 10:06:00.096476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.520 [2024-07-15 10:06:00.097142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.520 [2024-07-15 10:06:00.097193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.520 [2024-07-15 10:06:00.097224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.520 [2024-07-15 10:06:00.097519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.520 [2024-07-15 10:06:00.097746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.520 [2024-07-15 10:06:00.097769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.520 [2024-07-15 10:06:00.097793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.520 [2024-07-15 10:06:00.101131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:43.520 Malloc0 00:32:43.520 10:06:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.520 10:06:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:43.520 10:06:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.520 10:06:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:43.520 [2024-07-15 10:06:00.110074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.520 [2024-07-15 10:06:00.110535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.520 [2024-07-15 10:06:00.110566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199fb50 with addr=10.0.0.2, port=4420 00:32:43.520 [2024-07-15 10:06:00.110593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199fb50 is same with the state(5) to be set 00:32:43.520 10:06:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.520 10:06:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:43.520 [2024-07-15 10:06:00.110845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199fb50 (9): Bad file descriptor 00:32:43.520 10:06:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.520 10:06:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:43.520 [2024-07-15 10:06:00.111103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.520 [2024-07-15 10:06:00.111128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.520 [2024-07-15 10:06:00.111150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.520 [2024-07-15 10:06:00.114502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.520 10:06:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.520 10:06:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:43.520 10:06:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.520 10:06:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:43.520 [2024-07-15 10:06:00.122559] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.520 [2024-07-15 10:06:00.123677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.520 10:06:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.520 10:06:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2052327 00:32:43.520 [2024-07-15 10:06:00.160947] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
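Editor's note: scattered through the reset noise, bdevperf.sh lines 17-21 have now assembled the whole target: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem cnode1 carrying that namespace, and the 10.0.0.2:4420 listener whose arrival finally lets the reset succeed. The same sequence collected in one place; rpc_cmd is a thin wrapper over SPDK's scripts/rpc.py, run here from the repo root against the default socket:

  rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t tcp -o -u 8192       # transport options copied verbatim from the trace
  $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MB bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420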
00:32:53.545
00:32:53.545 Latency(us)
00:32:53.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:53.545 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:53.545 Verification LBA range: start 0x0 length 0x4000
00:32:53.545 Nvme1n1 : 15.00 6654.06 25.99 8973.38 0.00 8166.42 703.91 16214.09
00:32:53.545 ===================================================================================================================
00:32:53.545 Total : 6654.06 25.99 8973.38 0.00 8166.42 703.91 16214.09
00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:53.545 rmmod nvme_tcp 00:32:53.545 rmmod nvme_fabrics 00:32:53.545 rmmod nvme_keyring 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2052994 ']' 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2052994 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2052994 ']' 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2052994 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2052994 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2052994' 00:32:53.545 killing process with pid 2052994 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2052994 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2052994 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
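Editor's note: the table's throughput column is self-consistent with the 4096-byte I/O size, and the unusually large Fail/s figure is expected here, since the 15 s run deliberately spent long stretches reconnecting. Checking the MiB/s value:

  awk 'BEGIN { printf "%.2f MiB/s\n", 6654.06 * 4096 / (1024 * 1024) }'
  # -> 25.99 MiB/s, matching the Nvme1n1 row above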
00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:53.545 10:06:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:55.451 10:06:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:55.451 00:32:55.451 real 0m22.473s 00:32:55.451 user 1m0.729s 00:32:55.451 sys 0m4.014s 00:32:55.451 10:06:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:55.451 10:06:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:55.451 ************************************ 00:32:55.451 END TEST nvmf_bdevperf 00:32:55.451 ************************************ 00:32:55.451 10:06:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:55.451 10:06:11 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:55.451 10:06:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:55.451 10:06:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:55.451 10:06:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:55.451 ************************************ 00:32:55.451 START TEST nvmf_target_disconnect 00:32:55.451 ************************************ 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:55.451 * Looking for test storage... 
00:32:55.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:55.451 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:55.452 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:55.452 10:06:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:55.452 10:06:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:55.452 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:55.452 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:55.452 10:06:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:32:55.452 10:06:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:57.355 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:57.355 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:32:57.355 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:57.355 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:57.355 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:57.355 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:57.355 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:57.355 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:57.356 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:57.356 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.356 10:06:13 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:57.356 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:57.356 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:57.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:57.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:32:57.356 00:32:57.356 --- 10.0.0.2 ping statistics --- 00:32:57.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.356 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:57.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:57.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:32:57.356 00:32:57.356 --- 10.0.0.1 ping statistics --- 00:32:57.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.356 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:57.356 10:06:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:57.356 ************************************ 00:32:57.356 START TEST nvmf_target_disconnect_tc1 00:32:57.356 ************************************ 00:32:57.356 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:32:57.356 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:57.356 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:32:57.356 
10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:57.356 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:57.356 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:57.356 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:57.356 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:57.357 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:57.357 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:57.357 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:57.357 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:32:57.357 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:57.357 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.357 [2024-07-15 10:06:14.097770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.357 [2024-07-15 10:06:14.097848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20183e0 with addr=10.0.0.2, port=4420 00:32:57.357 [2024-07-15 10:06:14.097895] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:57.357 [2024-07-15 10:06:14.097923] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:57.357 [2024-07-15 10:06:14.097952] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:32:57.357 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:32:57.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:57.357 Initializing NVMe Controllers 00:32:57.357 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:32:57.357 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:57.357 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:57.357 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:57.357 00:32:57.357 real 0m0.092s 00:32:57.357 user 0m0.038s 00:32:57.357 sys 
0m0.053s 00:32:57.357 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:57.357 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:57.357 ************************************ 00:32:57.357 END TEST nvmf_target_disconnect_tc1 00:32:57.357 ************************************ 00:32:57.357 10:06:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:32:57.357 10:06:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:57.357 10:06:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:57.357 10:06:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:57.357 10:06:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:57.619 ************************************ 00:32:57.619 START TEST nvmf_target_disconnect_tc2 00:32:57.619 ************************************ 00:32:57.619 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:32:57.619 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:32:57.619 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:57.619 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:57.619 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:57.619 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:57.619 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2056758 00:32:57.619 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:57.619 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2056758 00:32:57.619 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2056758 ']' 00:32:57.619 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.619 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:57.619 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:57.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
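The tc1 case that just reported PASS is a negative test: at that point nothing was listening on 10.0.0.2:4420, so the reconnect example's spdk_nvme_probe() could not connect (errno 111 is ECONNREFUSED on Linux), the example exited nonzero, and the harness's NOT wrapper inverted that exit status into a pass. A sketch of that inverted-expectation shape in plain bash; the NOT name mirrors the helper invoked above, but this body and the probe command are illustrative, not the harness source:

    # Succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1    # unexpected success
        fi
        return 0        # expected failure observed
    }

    # A refused TCP connect makes the negative test pass.
    NOT bash -c ': </dev/tcp/10.0.0.2/4420' 2>/dev/null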
00:32:57.619 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:57.619 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:57.619 [2024-07-15 10:06:14.203884] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:32:57.619 [2024-07-15 10:06:14.204001] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:57.619 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.619 [2024-07-15 10:06:14.243916] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:57.619 [2024-07-15 10:06:14.270596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:57.619 [2024-07-15 10:06:14.361913] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:57.619 [2024-07-15 10:06:14.361975] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:57.619 [2024-07-15 10:06:14.362000] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:57.619 [2024-07-15 10:06:14.362011] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:57.619 [2024-07-15 10:06:14.362021] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:57.619 [2024-07-15 10:06:14.362109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:32:57.619 [2024-07-15 10:06:14.362158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:32:57.619 [2024-07-15 10:06:14.362206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:32:57.619 [2024-07-15 10:06:14.362208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:57.879 Malloc0 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.879 10:06:14 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:57.879 [2024-07-15 10:06:14.541175] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:57.879 [2024-07-15 10:06:14.569451] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2056787 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:32:57.879 10:06:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:57.879 EAL: No free 2048 kB hugepages reported on node 1 00:32:59.815 10:06:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2056758 00:32:59.815 10:06:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with 
error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 [2024-07-15 10:06:16.596115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Write completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 Read completed with error (sct=0, sc=8) 00:32:59.815 starting I/O failed 00:32:59.815 [2024-07-15 10:06:16.596397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.815 [2024-07-15 10:06:16.596599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.815 [2024-07-15 10:06:16.596628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:32:59.815 qpair failed and we were unable to recover it. 00:32:59.815 [2024-07-15 10:06:16.596776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.815 [2024-07-15 10:06:16.596804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:32:59.815 qpair failed and we were unable to recover it. 
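From here to the end of the excerpt the pattern is uniform: two seconds into the reconnect example's ten-second run, the harness killed the target out from under it (kill -9 2056758), every outstanding I/O on the example's qpairs was failed back (sct=0, sc=8 is NVMe generic status 0x08, Command Aborted due to SQ Deletion), and the CQ poll reported transport error -6 (ENXIO) as the qpairs were torn down. Each subsequent reconnect attempt to 10.0.0.2:4420 then dies in connect() with errno 111, ECONNREFUSED, because no listener exists any more, which is exactly the repeating three-line record below. A minimal sketch of how one could poll for the listener coming back, assuming a bash with /dev/tcp support; the function name and timings are illustrative:

    # Poll host:port until a plain TCP connect() succeeds or we give up.
    wait_for_listener() {
        local host=$1 port=$2 tries=${3:-30}
        for ((i = 0; i < tries; i++)); do
            # bash's /dev/tcp pseudo-path performs a real connect(2)
            if (: </dev/tcp/"$host"/"$port") 2>/dev/null; then
                return 0
            fi
            sleep 1
        done
        return 1
    }

    wait_for_listener 10.0.0.2 4420 || echo 'listener still down'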
00:32:59.816 [2024-07-15 10:06:16.596987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.816 [2024-07-15 10:06:16.597013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:32:59.816 qpair failed and we were unable to recover it. 00:32:59.816 [2024-07-15 10:06:16.597131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.816 [2024-07-15 10:06:16.597164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:32:59.816 qpair failed and we were unable to recover it. 00:32:59.816 [2024-07-15 10:06:16.597295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.816 [2024-07-15 10:06:16.597321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:32:59.816 qpair failed and we were unable to recover it. 00:32:59.816 [2024-07-15 10:06:16.597471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.816 [2024-07-15 10:06:16.597501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:32:59.816 qpair failed and we were unable to recover it. 00:32:59.816 [2024-07-15 10:06:16.597658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.816 [2024-07-15 10:06:16.597683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:32:59.816 qpair failed and we were unable to recover it. 00:32:59.816 [2024-07-15 10:06:16.597910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.816 [2024-07-15 10:06:16.597941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:32:59.816 qpair failed and we were unable to recover it. 00:32:59.816 [2024-07-15 10:06:16.598062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.816 [2024-07-15 10:06:16.598088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:32:59.816 qpair failed and we were unable to recover it. 00:32:59.816 [2024-07-15 10:06:16.598242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.816 [2024-07-15 10:06:16.598267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:32:59.816 qpair failed and we were unable to recover it. 00:32:59.816 [2024-07-15 10:06:16.598407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.816 [2024-07-15 10:06:16.598432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:32:59.816 qpair failed and we were unable to recover it. 00:32:59.816 [2024-07-15 10:06:16.598559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.816 [2024-07-15 10:06:16.598584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:32:59.816 qpair failed and we were unable to recover it. 
00:32:59.816 [2024-07-15 10:06:16.598793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.816 [2024-07-15 10:06:16.598817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:32:59.816 qpair failed and we were unable to recover it. 00:32:59.816 [2024-07-15 10:06:16.598961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.816 [2024-07-15 10:06:16.598988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:32:59.816 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.599116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.599154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.599302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.599328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.599448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.599473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.599598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.599623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.599780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.599805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.600034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.600060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.600207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.600232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.600361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.600386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 
00:33:00.096 [2024-07-15 10:06:16.600535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.600560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.600712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.600740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.600912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.600941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.601099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.601124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.601343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.601369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.601515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.601540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.601719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.601760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.601949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.601975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.602100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.602125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.602270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.602296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 
00:33:00.096 [2024-07-15 10:06:16.602440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.602465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.602591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.602617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.602839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.602864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.603062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.603088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.603266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.603291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.603432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.603457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.603629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.603654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.603801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.603826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.603937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.603963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.604092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.604117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 
00:33:00.096 [2024-07-15 10:06:16.604241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.604266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.604387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.604413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.604536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.604561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.604713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.604738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.604856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.096 [2024-07-15 10:06:16.604892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.096 qpair failed and we were unable to recover it. 00:33:00.096 [2024-07-15 10:06:16.605071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.605096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.605243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.605268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.605442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.605467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.605593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.605618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.605771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.605796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 
00:33:00.097 [2024-07-15 10:06:16.605972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.605998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.606146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.606189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.606380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.606405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.606584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.606609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.606755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.606780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.606928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.606954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.607176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.607201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.607374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.607399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.607526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.607551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.607717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.607742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 
00:33:00.097 [2024-07-15 10:06:16.607911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.607939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.608082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.608106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.608255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.608280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.608427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.608453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.608606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.608631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.608783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.608808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.609035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.609061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.609232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.609258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.609430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.609455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 00:33:00.097 [2024-07-15 10:06:16.609627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.097 [2024-07-15 10:06:16.609652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.097 qpair failed and we were unable to recover it. 
00:33:00.102 [2024-07-15 10:06:16.646006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.102 [2024-07-15 10:06:16.646037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.102 qpair failed and we were unable to recover it. 00:33:00.102 [2024-07-15 10:06:16.646227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.102 [2024-07-15 10:06:16.646254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.102 qpair failed and we were unable to recover it. 00:33:00.102 [2024-07-15 10:06:16.646401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.102 [2024-07-15 10:06:16.646426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.102 qpair failed and we were unable to recover it. 00:33:00.102 [2024-07-15 10:06:16.646548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.102 [2024-07-15 10:06:16.646575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.102 qpair failed and we were unable to recover it. 00:33:00.102 [2024-07-15 10:06:16.646748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.102 [2024-07-15 10:06:16.646775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.102 qpair failed and we were unable to recover it. 00:33:00.102 [2024-07-15 10:06:16.646892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.102 [2024-07-15 10:06:16.646918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.102 qpair failed and we were unable to recover it. 00:33:00.102 [2024-07-15 10:06:16.647049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.102 [2024-07-15 10:06:16.647075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.102 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.647255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.647281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.647426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.647454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.647622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.647648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 
00:33:00.103 [2024-07-15 10:06:16.647774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.647799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.647956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.647984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.648134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.648161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.648312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.648339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.648488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.648515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.648670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.648696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.648845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.648872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.649102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.649130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.649278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.649305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.649458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.649488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 
00:33:00.103 [2024-07-15 10:06:16.649614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.649640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.649789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.649816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.650041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.650069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.650186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.650213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.650326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.650352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.650544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.650571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.650710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.650737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.650932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.650959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.651084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.651110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.651261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.651288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 
00:33:00.103 [2024-07-15 10:06:16.651429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.651456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.651629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.651656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.651858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.651892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.652078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.652105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.652252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.652279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.652424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.652451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.652595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.652620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.652762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.652789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.652940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.652968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.653096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.653121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 
00:33:00.103 [2024-07-15 10:06:16.653262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.653289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.653440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.653484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.653683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.653710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.653934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.653963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.654131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.654161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.654323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.654353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.103 [2024-07-15 10:06:16.654520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.103 [2024-07-15 10:06:16.654550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.103 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.654709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.654739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.654886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.654916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.655089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.655116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 
00:33:00.104 [2024-07-15 10:06:16.655340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.655366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.655493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.655519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.655690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.655717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.655947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.655977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.656169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.656197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.656321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.656346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.656520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.656547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.656701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.656728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.656872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.656909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.657082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.657112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 
00:33:00.104 [2024-07-15 10:06:16.657318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.657345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.657492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.657520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.657690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.657717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.657858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.657918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.658085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.658112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.658240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.658265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.658404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.658430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.658549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.658574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.658754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.658781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.658931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.658960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 
00:33:00.104 [2024-07-15 10:06:16.659110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.659137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.659286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.659312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.659517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.659543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.659662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.659687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.659837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.659864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.659990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.660015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.660160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.660187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.660336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.660363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.660503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.660546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.660713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.660740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 
00:33:00.104 [2024-07-15 10:06:16.660861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.660910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.661072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.661101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.661285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.661311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.661460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.661488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.104 [2024-07-15 10:06:16.661627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.104 [2024-07-15 10:06:16.661654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.104 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.661814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.661841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.661983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.662009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.662137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.662168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.662318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.662345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.662497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.662525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 
00:33:00.105 [2024-07-15 10:06:16.662700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.662727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.662900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.662929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.663123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.663153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.663346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.663376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.663510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.663536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.663711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.663738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.663887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.663913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.664067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.664094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.664246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.664273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.664428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.664456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 
00:33:00.105 [2024-07-15 10:06:16.664599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.664627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.664833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.664863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.665060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.665088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.665257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.665284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.665409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.665434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.665588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.665618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.665756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.665783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.665905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.665931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.666091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.666121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.666282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.666308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 
00:33:00.105 [2024-07-15 10:06:16.666459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.666486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.666628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.666656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.666829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.666856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.667016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.667044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.667216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.667247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.667428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.667455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.667603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.667630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.667782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.667810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.667963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.667992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.668139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.668165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 
00:33:00.105 [2024-07-15 10:06:16.668322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.668364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.668494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.105 [2024-07-15 10:06:16.668519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.105 qpair failed and we were unable to recover it. 00:33:00.105 [2024-07-15 10:06:16.668693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.106 [2024-07-15 10:06:16.668720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.106 qpair failed and we were unable to recover it. 00:33:00.106 [2024-07-15 10:06:16.668891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.106 [2024-07-15 10:06:16.668919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.106 qpair failed and we were unable to recover it. 00:33:00.106 [2024-07-15 10:06:16.669075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.106 [2024-07-15 10:06:16.669102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.106 qpair failed and we were unable to recover it. 00:33:00.106 [2024-07-15 10:06:16.669279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.106 [2024-07-15 10:06:16.669324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.106 qpair failed and we were unable to recover it. 00:33:00.106 [2024-07-15 10:06:16.669455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.106 [2024-07-15 10:06:16.669484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.106 qpair failed and we were unable to recover it. 00:33:00.106 [2024-07-15 10:06:16.669674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.106 [2024-07-15 10:06:16.669701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.106 qpair failed and we were unable to recover it. 00:33:00.106 [2024-07-15 10:06:16.669894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.106 [2024-07-15 10:06:16.669922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.106 qpair failed and we were unable to recover it. 00:33:00.106 [2024-07-15 10:06:16.670120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.106 [2024-07-15 10:06:16.670150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.106 qpair failed and we were unable to recover it. 
00:33:00.106 [2024-07-15 10:06:16.670322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.106 [2024-07-15 10:06:16.670349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.106 qpair failed and we were unable to recover it.
[... the same three-line failure record (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats continuously from 10:06:16.670 to 10:06:16.711; duplicate records elided ...]
00:33:00.111 [2024-07-15 10:06:16.711034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.111 [2024-07-15 10:06:16.711065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.111 qpair failed and we were unable to recover it.
00:33:00.111 [2024-07-15 10:06:16.711234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.711261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 00:33:00.111 [2024-07-15 10:06:16.711424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.711454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 00:33:00.111 [2024-07-15 10:06:16.711590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.711620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 00:33:00.111 [2024-07-15 10:06:16.711761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.711788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 00:33:00.111 [2024-07-15 10:06:16.711935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.711978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 00:33:00.111 [2024-07-15 10:06:16.712134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.712164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 00:33:00.111 [2024-07-15 10:06:16.712373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.712400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 00:33:00.111 [2024-07-15 10:06:16.712570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.712599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 00:33:00.111 [2024-07-15 10:06:16.712764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.712793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 00:33:00.111 [2024-07-15 10:06:16.712981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.713008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 
00:33:00.111 [2024-07-15 10:06:16.713172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.713202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 00:33:00.111 [2024-07-15 10:06:16.713356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.713385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 00:33:00.111 [2024-07-15 10:06:16.713546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.713573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 00:33:00.111 [2024-07-15 10:06:16.713763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.713793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 00:33:00.111 [2024-07-15 10:06:16.713963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.713991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 00:33:00.111 [2024-07-15 10:06:16.714117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.714144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 00:33:00.111 [2024-07-15 10:06:16.714261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.714286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 00:33:00.111 [2024-07-15 10:06:16.714442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.714469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 00:33:00.111 [2024-07-15 10:06:16.714618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.714645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 00:33:00.111 [2024-07-15 10:06:16.714789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.714836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.111 qpair failed and we were unable to recover it. 
00:33:00.111 [2024-07-15 10:06:16.715015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.111 [2024-07-15 10:06:16.715046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.715205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.715233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.715398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.715430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.715627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.715654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.715805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.715832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.715980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.716007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.716125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.716151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.716309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.716336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.716458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.716486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.716667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.716697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 
00:33:00.112 [2024-07-15 10:06:16.716837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.716864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.717074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.717104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.717270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.717300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.717474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.717501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.717694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.717724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.717906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.717937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.718100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.718128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.718252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.718279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.718426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.718453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.718593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.718620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 
00:33:00.112 [2024-07-15 10:06:16.718784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.718814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.718970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.719001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.719164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.719191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.719356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.719385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.719548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.719578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.719712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.719738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.719892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.719937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.720098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.720128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.720285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.720312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.720433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.720477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 
00:33:00.112 [2024-07-15 10:06:16.720641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.720670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.720841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.720868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.721063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.721091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.721239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.721265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.721378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.721405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.721577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.721604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.721748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.721779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.721957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.721985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.722096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.722122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.722324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.722354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 
00:33:00.112 [2024-07-15 10:06:16.722547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.112 [2024-07-15 10:06:16.722578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.112 qpair failed and we were unable to recover it. 00:33:00.112 [2024-07-15 10:06:16.722745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.722774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.722909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.722939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.723082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.723109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.723234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.723262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.723408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.723434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.723582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.723609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.723803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.723832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.724002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.724033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.724174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.724201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 
00:33:00.113 [2024-07-15 10:06:16.724389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.724418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.724573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.724603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.724806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.724832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.724985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.725013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.725140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.725167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.725319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.725346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.725518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.725549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.725706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.725736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.725909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.725937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.726078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.726105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 
00:33:00.113 [2024-07-15 10:06:16.726280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.726310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.726500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.726527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.726693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.726723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.726857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.726893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.727049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.727076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.727223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.727267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.727433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.727463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.727610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.727641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.727803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.727833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.728029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.728060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 
00:33:00.113 [2024-07-15 10:06:16.728227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.728254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.728406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.728433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.728584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.728610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.728758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.728784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.728978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.729008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.729148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.113 [2024-07-15 10:06:16.729178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.113 qpair failed and we were unable to recover it. 00:33:00.113 [2024-07-15 10:06:16.729371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.729398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.729557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.729586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.729742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.729771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.729941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.729968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 
00:33:00.114 [2024-07-15 10:06:16.730159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.730188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.730347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.730377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.730547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.730573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.730689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.730735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.730901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.730932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.731082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.731109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.731258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.731302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.731441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.731471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.731636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.731663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.731855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.731892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 
00:33:00.114 [2024-07-15 10:06:16.732059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.732086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.732258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.732285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.732460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.732490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.732654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.732684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.732841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.732868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.733047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.733077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.733254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.733280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.733399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.733425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.733572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.733615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.733802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.733831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 
00:33:00.114 [2024-07-15 10:06:16.734015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.734043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.734208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.734238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.734379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.734408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.734545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.734572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.734767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.734797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.734963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.734991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.735163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.735190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.735316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.735346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.735509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.735545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 00:33:00.114 [2024-07-15 10:06:16.735715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.114 [2024-07-15 10:06:16.735744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.114 qpair failed and we were unable to recover it. 
00:33:00.114 [2024-07-15 10:06:16.735907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.114 [2024-07-15 10:06:16.735938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.114 qpair failed and we were unable to recover it.
[... the same three-line failure repeats, with only the microsecond timestamps advancing, for every qpair connect attempt between 10:06:16.735907 and 10:06:16.775346: connect() to 10.0.0.2 port 4420 is refused (errno = 111), the qpair at 0xb2c450 cannot be established, and recovery fails each time ...]
00:33:00.120 [2024-07-15 10:06:16.775317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.120 [2024-07-15 10:06:16.775346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.120 qpair failed and we were unable to recover it.
00:33:00.120 [2024-07-15 10:06:16.775495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.775521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.775644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.775671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.775825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.775852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.776048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.776078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.776221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.776247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.776362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.776388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.776574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.776603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.776764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.776793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.776993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.777020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.777154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.777185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 
00:33:00.120 [2024-07-15 10:06:16.777313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.777342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.777481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.777510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.777672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.777699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.777852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.777904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.778063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.778092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.778277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.778306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.778447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.778473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.778661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.778689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.778865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.778898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.779042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.779069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 
00:33:00.120 [2024-07-15 10:06:16.779256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.779282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.779407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.779448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.779608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.779637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.779765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.779794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.779960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.779988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.780105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.780148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.780335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.780364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.780497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.780526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.780719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.780745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.780911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.780941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 
00:33:00.120 [2024-07-15 10:06:16.781100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.781129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.781289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.781317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.781516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.781542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.781705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.781738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.120 [2024-07-15 10:06:16.781905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.120 [2024-07-15 10:06:16.781935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.120 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.782065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.782095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.782265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.782291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.782419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.782463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.782624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.782653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.782841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.782867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 
00:33:00.121 [2024-07-15 10:06:16.783028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.783055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.783216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.783245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.783437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.783467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.783617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.783646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.783793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.783819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.783968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.783995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.784169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.784197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.784390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.784419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.784567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.784593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.784782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.784811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 
00:33:00.121 [2024-07-15 10:06:16.784979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.785009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.785134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.785163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.785327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.785353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.785469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.785511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.785699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.785728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.785892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.785923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.786114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.786140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.786340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.786369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.786685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.786744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.786930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.786960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 
00:33:00.121 [2024-07-15 10:06:16.787128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.787158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.787324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.787353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.787611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.787662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.787855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.787889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.788042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.788068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.788213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.788239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.788409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.788469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.788656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.788686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.788853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.788887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.789086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.789115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 
00:33:00.121 [2024-07-15 10:06:16.789367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.789419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.789591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.789620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.789785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.789812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.789960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.789988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.790138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.790182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.121 qpair failed and we were unable to recover it. 00:33:00.121 [2024-07-15 10:06:16.790366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.121 [2024-07-15 10:06:16.790395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.790528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.790555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.790706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.790749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.790891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.790922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.791085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.791114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 
00:33:00.122 [2024-07-15 10:06:16.791276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.791302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.791466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.791495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.791652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.791681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.791868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.791904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.792069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.792096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.792279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.792308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.792605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.792668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.792825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.792854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.793039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.793065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.793210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.793253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 
00:33:00.122 [2024-07-15 10:06:16.793450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.793479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.793643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.793672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.793832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.793859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.794081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.794111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.794343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.794396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.794557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.794586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.794754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.794780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.794949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.794980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.795207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.795258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.795419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.795448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 
00:33:00.122 [2024-07-15 10:06:16.795621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.795647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.795776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.795806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.795958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.796001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.796139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.796168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.796356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.796382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.796574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.796603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.796740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.796769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.796926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.796956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.797099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.797125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 00:33:00.122 [2024-07-15 10:06:16.797313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.797342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.122 qpair failed and we were unable to recover it. 
00:33:00.122 [2024-07-15 10:06:16.797469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.122 [2024-07-15 10:06:16.797499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 00:33:00.123 [2024-07-15 10:06:16.797664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.123 [2024-07-15 10:06:16.797693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 00:33:00.123 [2024-07-15 10:06:16.797830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.123 [2024-07-15 10:06:16.797856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 00:33:00.123 [2024-07-15 10:06:16.798011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.123 [2024-07-15 10:06:16.798037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 00:33:00.123 [2024-07-15 10:06:16.798165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.123 [2024-07-15 10:06:16.798194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 00:33:00.123 [2024-07-15 10:06:16.798361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.123 [2024-07-15 10:06:16.798390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 00:33:00.123 [2024-07-15 10:06:16.798533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.123 [2024-07-15 10:06:16.798559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 00:33:00.123 [2024-07-15 10:06:16.798705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.123 [2024-07-15 10:06:16.798731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 00:33:00.123 [2024-07-15 10:06:16.798905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.123 [2024-07-15 10:06:16.798935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 00:33:00.123 [2024-07-15 10:06:16.799099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.123 [2024-07-15 10:06:16.799125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 
00:33:00.123 [2024-07-15 10:06:16.799298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.123 [2024-07-15 10:06:16.799324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 00:33:00.123 [2024-07-15 10:06:16.799515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.123 [2024-07-15 10:06:16.799544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 00:33:00.123 [2024-07-15 10:06:16.799697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.123 [2024-07-15 10:06:16.799726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 00:33:00.123 [2024-07-15 10:06:16.799864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.123 [2024-07-15 10:06:16.799901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 00:33:00.123 [2024-07-15 10:06:16.800104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.123 [2024-07-15 10:06:16.800130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 00:33:00.123 [2024-07-15 10:06:16.800318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.123 [2024-07-15 10:06:16.800346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 00:33:00.123 [2024-07-15 10:06:16.800620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.123 [2024-07-15 10:06:16.800671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 00:33:00.123 [2024-07-15 10:06:16.800826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.123 [2024-07-15 10:06:16.800855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 00:33:00.123 [2024-07-15 10:06:16.801015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.123 [2024-07-15 10:06:16.801045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 00:33:00.123 [2024-07-15 10:06:16.801214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.123 [2024-07-15 10:06:16.801256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.123 qpair failed and we were unable to recover it. 
00:33:00.123 [2024-07-15 10:06:16.801513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.123 [2024-07-15 10:06:16.801564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.123 qpair failed and we were unable to recover it.
00:33:00.123 [... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats verbatim for every reconnect attempt from 10:06:16.801 through 10:06:16.841 ...]
00:33:00.128 [2024-07-15 10:06:16.841798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.128 [2024-07-15 10:06:16.841827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.128 qpair failed and we were unable to recover it.
00:33:00.128 [2024-07-15 10:06:16.842033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.128 [2024-07-15 10:06:16.842060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.128 qpair failed and we were unable to recover it. 00:33:00.128 [2024-07-15 10:06:16.842205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.128 [2024-07-15 10:06:16.842234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.128 qpair failed and we were unable to recover it. 00:33:00.128 [2024-07-15 10:06:16.842428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.128 [2024-07-15 10:06:16.842457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.128 qpair failed and we were unable to recover it. 00:33:00.128 [2024-07-15 10:06:16.842636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.128 [2024-07-15 10:06:16.842665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.128 qpair failed and we were unable to recover it. 00:33:00.128 [2024-07-15 10:06:16.842855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.128 [2024-07-15 10:06:16.842889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.128 qpair failed and we were unable to recover it. 00:33:00.128 [2024-07-15 10:06:16.843055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.128 [2024-07-15 10:06:16.843084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.128 qpair failed and we were unable to recover it. 00:33:00.128 [2024-07-15 10:06:16.843336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.128 [2024-07-15 10:06:16.843390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.128 qpair failed and we were unable to recover it. 00:33:00.128 [2024-07-15 10:06:16.843586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.128 [2024-07-15 10:06:16.843615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.128 qpair failed and we were unable to recover it. 00:33:00.128 [2024-07-15 10:06:16.843819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.128 [2024-07-15 10:06:16.843845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.128 qpair failed and we were unable to recover it. 00:33:00.128 [2024-07-15 10:06:16.844000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.128 [2024-07-15 10:06:16.844027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.128 qpair failed and we were unable to recover it. 
00:33:00.128 [2024-07-15 10:06:16.844145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.844188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.844347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.844376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.844546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.844572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.844742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.844768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.844917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.844947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.845112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.845141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.845301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.845327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.845493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.845521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.845721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.845747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.845887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.845914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 
00:33:00.129 [2024-07-15 10:06:16.846062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.846089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.846249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.846278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.846505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.846557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.846716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.846745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.846887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.846914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.847045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.847071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.847242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.847269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.847445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.847474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.847633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.847660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.847786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.847812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 
00:33:00.129 [2024-07-15 10:06:16.847939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.847966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.848116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.848143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.848286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.848312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.848435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.848465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.848585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.848611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.848762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.848789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.848915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.848942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.849089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.849115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.849262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.849288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.849434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.849463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 
00:33:00.129 [2024-07-15 10:06:16.849623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.849649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.849759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.849785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.849936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.849964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.850104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.850131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.850253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.850280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.850424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.850450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.850596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.850622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.850773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.850799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.850955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.850982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 00:33:00.129 [2024-07-15 10:06:16.851130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.851156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.129 qpair failed and we were unable to recover it. 
00:33:00.129 [2024-07-15 10:06:16.851332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.129 [2024-07-15 10:06:16.851361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.130 qpair failed and we were unable to recover it. 00:33:00.130 [2024-07-15 10:06:16.851495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.130 [2024-07-15 10:06:16.851524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.130 qpair failed and we were unable to recover it. 00:33:00.130 [2024-07-15 10:06:16.851709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.130 [2024-07-15 10:06:16.851735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.130 qpair failed and we were unable to recover it. 00:33:00.130 [2024-07-15 10:06:16.851874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.130 [2024-07-15 10:06:16.851907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.130 qpair failed and we were unable to recover it. 00:33:00.130 [2024-07-15 10:06:16.852037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.130 [2024-07-15 10:06:16.852065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.130 qpair failed and we were unable to recover it. 00:33:00.130 [2024-07-15 10:06:16.852233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.130 [2024-07-15 10:06:16.852259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.130 qpair failed and we were unable to recover it. 00:33:00.130 [2024-07-15 10:06:16.852378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.130 [2024-07-15 10:06:16.852403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.130 qpair failed and we were unable to recover it. 00:33:00.130 [2024-07-15 10:06:16.852517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.130 [2024-07-15 10:06:16.852543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.130 qpair failed and we were unable to recover it. 00:33:00.130 [2024-07-15 10:06:16.852666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.130 [2024-07-15 10:06:16.852692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.130 qpair failed and we were unable to recover it. 00:33:00.130 [2024-07-15 10:06:16.852830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.130 [2024-07-15 10:06:16.852856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.130 qpair failed and we were unable to recover it. 
00:33:00.130 Read completed with error (sct=0, sc=8)
00:33:00.130 starting I/O failed
00:33:00.130 [... 31 more queued Read/Write I/O completions fail with (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:33:00.130 [2024-07-15 10:06:16.853177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:00.130 [... 32 further queued Read/Write I/O completions fail with (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:33:00.130 [2024-07-15 10:06:16.853506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:00.130 [2024-07-15 10:06:16.853621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3a480 is same with the state(5) to be set
00:33:00.130 [2024-07-15 10:06:16.853800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.130 [2024-07-15 10:06:16.853850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.130 qpair failed and we were unable to recover it.
00:33:00.130 [... the same connect()/errno = 111 failure and unrecoverable qpair error repeats for tqpair=0x7f0258000b90 through 2024-07-15 10:06:16.854207 ...]
00:33:00.130 [2024-07-15 10:06:16.854334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.130 [2024-07-15 10:06:16.854361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.130 qpair failed and we were unable to recover it.
00:33:00.131 [... the same connect()/errno = 111 failure and unrecoverable qpair error repeats for tqpair=0xb2c450 from 2024-07-15 10:06:16.854510 through 10:06:16.862959 ...]
00:33:00.416 [2024-07-15 10:06:16.863112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.416 [2024-07-15 10:06:16.863159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:00.416 qpair failed and we were unable to recover it.
00:33:00.416 [... the same failure repeats for tqpair=0x7f0268000b90 through 2024-07-15 10:06:16.864618 ...]
00:33:00.417 [2024-07-15 10:06:16.864808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.417 [2024-07-15 10:06:16.864838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.417 qpair failed and we were unable to recover it.
00:33:00.417 [... the same failure repeats for tqpair=0xb2c450 through 2024-07-15 10:06:16.870297 ...]
00:33:00.417 [2024-07-15 10:06:16.870420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.417 [2024-07-15 10:06:16.870448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.417 qpair failed and we were unable to recover it. 00:33:00.417 [2024-07-15 10:06:16.870573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.417 [2024-07-15 10:06:16.870599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.417 qpair failed and we were unable to recover it. 00:33:00.417 [2024-07-15 10:06:16.870753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.417 [2024-07-15 10:06:16.870779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.417 qpair failed and we were unable to recover it. 00:33:00.417 [2024-07-15 10:06:16.870933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.417 [2024-07-15 10:06:16.870961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.417 qpair failed and we were unable to recover it. 00:33:00.417 [2024-07-15 10:06:16.871118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.417 [2024-07-15 10:06:16.871144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.417 qpair failed and we were unable to recover it. 00:33:00.417 [2024-07-15 10:06:16.871286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.417 [2024-07-15 10:06:16.871313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.417 qpair failed and we were unable to recover it. 00:33:00.417 [2024-07-15 10:06:16.871465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.417 [2024-07-15 10:06:16.871492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.417 qpair failed and we were unable to recover it. 00:33:00.417 [2024-07-15 10:06:16.871632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.417 [2024-07-15 10:06:16.871659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.417 qpair failed and we were unable to recover it. 00:33:00.417 [2024-07-15 10:06:16.871784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.417 [2024-07-15 10:06:16.871810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.871932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.871959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 
00:33:00.418 [2024-07-15 10:06:16.872104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.872130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.872293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.872319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.872491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.872517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.872631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.872657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.872788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.872814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.872966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.872997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.873143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.873185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.873350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.873376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.873543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.873569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.873709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.873736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 
00:33:00.418 [2024-07-15 10:06:16.873859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.873891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.874015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.874042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.874160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.874186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.874332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.874358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.874481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.874507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.874651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.874677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.874819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.874845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.874971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.874998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.875153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.875180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.875337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.875377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 
00:33:00.418 [2024-07-15 10:06:16.875537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.875584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.875784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.875832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.875979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.876007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.876179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.876223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.876389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.876419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.876609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.876640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.876805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.876834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.877013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.877040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.877208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.877237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.877409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.877451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 
00:33:00.418 [2024-07-15 10:06:16.877590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.877619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.877782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.877811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.877985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.878012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.878159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.878202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.878411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.878440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.878631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.878659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.878850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.878888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.879059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.879086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.879230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.879256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 00:33:00.418 [2024-07-15 10:06:16.879391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.418 [2024-07-15 10:06:16.879420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.418 qpair failed and we were unable to recover it. 
00:33:00.419 [2024-07-15 10:06:16.879574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.879602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.879768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.879794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.879958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.879985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.880098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.880123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.880248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.880274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.880401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.880427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.880578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.880635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.880797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.880825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.880966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.880995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.881164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.881208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 
00:33:00.419 [2024-07-15 10:06:16.881368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.881397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.881566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.881613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.881796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.881824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.881976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.882003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.882121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.882147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.882382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.882410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.882573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.882602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.882736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.882764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.882926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.882953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.883098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.883124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 
00:33:00.419 [2024-07-15 10:06:16.883268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.883297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.883457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.883486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.883662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.883688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.883814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.883840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.883995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.884023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.884140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.884167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.884339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.884365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.884529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.884559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.884717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.884746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.884898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.884942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 
00:33:00.419 [2024-07-15 10:06:16.885067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.885093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.885213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.885239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.885362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.885388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.885532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.885567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.885731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.885760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.885902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.885945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.886058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.886084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.886259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.886286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.886425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.886451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.419 [2024-07-15 10:06:16.886607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.886636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 
00:33:00.419 [2024-07-15 10:06:16.886831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.419 [2024-07-15 10:06:16.886860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.419 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.887036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.887063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.887210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.887236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.887354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.887380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.887502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.887528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.887675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.887703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.887830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.887859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.888039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.888065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.888175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.888201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.888318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.888344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 
00:33:00.420 [2024-07-15 10:06:16.888497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.888523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.888661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.888690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.888848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.888883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.889034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.889060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.889191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.889217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.889364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.889390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.889530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.889556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.889752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.889780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.889977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.890004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.890125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.890152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 
00:33:00.420 [2024-07-15 10:06:16.890369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.890401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.890566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.890594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.890762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.890787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.890935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.890961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.891104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.891134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.891295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.891325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.891459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.891487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.891613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.891642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.891784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.891813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.891980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.892006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 
00:33:00.420 [2024-07-15 10:06:16.892181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.892207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.892355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.892381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.892545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.892575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.892740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.892769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.892967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.420 [2024-07-15 10:06:16.893007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.420 qpair failed and we were unable to recover it. 00:33:00.420 [2024-07-15 10:06:16.893165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.893205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.893407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.893445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.893588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.893614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.893785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.893814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.893990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.894017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 
00:33:00.421 [2024-07-15 10:06:16.894191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.894216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.894385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.894411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.894578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.894604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.894719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.894746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.894890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.894917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.895101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.895127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.895245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.895271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.895517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.895569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.895734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.895763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.895895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.895940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 
00:33:00.421 [2024-07-15 10:06:16.896072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.896098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.896221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.896247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.896418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.896444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.896639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.896668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.896852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.896888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.897056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.897082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.897229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.897255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.897397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.897422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.897572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.897598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.897775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.897803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 
00:33:00.421 [2024-07-15 10:06:16.897969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.897996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.898136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.898166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.898313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.898338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.898487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.898513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.898652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.898678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.898825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.898850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.898999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.899025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.899199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.899225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.899396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.899422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.899571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.899597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 
00:33:00.421 [2024-07-15 10:06:16.899741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.899767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.899940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.899967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.900113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.900140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.900318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.900347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.900490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.900516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.900669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.421 [2024-07-15 10:06:16.900695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.421 qpair failed and we were unable to recover it. 00:33:00.421 [2024-07-15 10:06:16.900815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.900841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.901004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.901031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.901143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.901169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.901319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.901345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 
00:33:00.422 [2024-07-15 10:06:16.901488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.901514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.901652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.901678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.901804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.901831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.901945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.901972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.902118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.902144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.902265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.902292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.902412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.902438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.902568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.902594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.902740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.902770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.902920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.902947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 
00:33:00.422 [2024-07-15 10:06:16.903065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.903093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.903274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.903303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.903467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.903494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.903667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.903693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.903853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.903886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.904030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.904056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.904177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.904203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.904344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.904370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.904513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.904539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.904664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.904691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 
00:33:00.422 [2024-07-15 10:06:16.904817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.904844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.904975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.905001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.905171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.905211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.905386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.905422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.905606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.905633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.905782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.905811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.905966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.905999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.906126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.906152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.906325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.906352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.906465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.906501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 
00:33:00.422 [2024-07-15 10:06:16.906652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.906678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.906826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.906854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.907010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.907049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.907209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.907237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.907394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.907421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.907572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.907618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.907867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.422 [2024-07-15 10:06:16.907936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.422 qpair failed and we were unable to recover it. 00:33:00.422 [2024-07-15 10:06:16.908083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.908112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.908258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.908287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.908464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.908508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 
00:33:00.423 [2024-07-15 10:06:16.908683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.908730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.908873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.908922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.909048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.909075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.909198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.909224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.909445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.909474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.909670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.909725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.909857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.909903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.910045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.910071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.910237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.910265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.910405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.910434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 
00:33:00.423 [2024-07-15 10:06:16.910620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.910649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.910785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.910814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.910977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.911005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.911119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.911145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.911266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.911292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.911403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.911430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.911572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.911601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.911792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.911821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.911992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.912020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.912152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.912179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 
00:33:00.423 [2024-07-15 10:06:16.912318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.912344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.912518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.912544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.912712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.912746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.912905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.912950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.913068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.913094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.913217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.913243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.913367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.913393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.913537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.913566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.913719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.913747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.913898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.913942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 
00:33:00.423 [2024-07-15 10:06:16.914063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.914090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.914238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.914265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.914391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.914434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.914598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.914627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.914812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.914841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.914991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.915018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.915192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.915233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.423 [2024-07-15 10:06:16.915456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.423 [2024-07-15 10:06:16.915511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.423 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.915694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.915746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.915931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.915960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 
00:33:00.424 [2024-07-15 10:06:16.916109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.916136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.916286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.916313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.916573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.916625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.916809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.916839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.916997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.917024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.917174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.917201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.917349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.917376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.917497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.917523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.917643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.917669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.917858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.917895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 
00:33:00.424 [2024-07-15 10:06:16.918064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.918091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.918243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.918269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.918393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.918419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.918540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.918566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.918688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.918714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.918861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.918899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.919071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.919097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.919244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.919270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.919386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.919412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.919531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.919557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 
00:33:00.424 [2024-07-15 10:06:16.919679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.919705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.919844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.919870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.920021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.920047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.920163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.920211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.920404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.920430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.920556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.920582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.920738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.920764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.920907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.920935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.921057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.921083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.921263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.921290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 
00:33:00.424 [2024-07-15 10:06:16.921437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.921463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.921584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.921611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.921737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.921763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.424 qpair failed and we were unable to recover it. 00:33:00.424 [2024-07-15 10:06:16.921882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.424 [2024-07-15 10:06:16.921909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.922085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.922111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.922227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.922254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.922410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.922436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.922562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.922588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.922728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.922754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.922888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.922915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 
00:33:00.425 [2024-07-15 10:06:16.923058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.923085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.923206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.923232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.923353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.923379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.923501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.923528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.923644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.923670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.923849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.923882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.924001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.924027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.924153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.924179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.924289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.924315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.924490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.924516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 
00:33:00.425 [2024-07-15 10:06:16.924666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.924696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.924846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.924872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.925005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.925031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.925175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.925201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.925350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.925376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.925549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.925575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.925723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.925749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.925899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.925926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.926065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.926091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 00:33:00.425 [2024-07-15 10:06:16.926224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.425 [2024-07-15 10:06:16.926253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.425 qpair failed and we were unable to recover it. 
00:33:00.427 [2024-07-15 10:06:16.938673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.427 [2024-07-15 10:06:16.938729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:00.427 qpair failed and we were unable to recover it.
00:33:00.427 [... identical connect() failures (errno = 111) against addr=10.0.0.2, port=4420 continue in alternating runs for tqpair=0xb2c450 and tqpair=0x7f0260000b90, every qpair failing without recovery, through 10:06:16.962695 ...]
00:33:00.430 [2024-07-15 10:06:16.962821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.430 [2024-07-15 10:06:16.962850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.430 qpair failed and we were unable to recover it. 00:33:00.430 [2024-07-15 10:06:16.963023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.430 [2024-07-15 10:06:16.963050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.430 qpair failed and we were unable to recover it. 00:33:00.430 [2024-07-15 10:06:16.963195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.430 [2024-07-15 10:06:16.963220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.430 qpair failed and we were unable to recover it. 00:33:00.430 [2024-07-15 10:06:16.963366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.430 [2024-07-15 10:06:16.963392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.430 qpair failed and we were unable to recover it. 00:33:00.430 [2024-07-15 10:06:16.963536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.430 [2024-07-15 10:06:16.963565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.430 qpair failed and we were unable to recover it. 00:33:00.430 [2024-07-15 10:06:16.963731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.430 [2024-07-15 10:06:16.963759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.430 qpair failed and we were unable to recover it. 00:33:00.430 [2024-07-15 10:06:16.963914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.430 [2024-07-15 10:06:16.963961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.430 qpair failed and we were unable to recover it. 00:33:00.430 [2024-07-15 10:06:16.964135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.430 [2024-07-15 10:06:16.964164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.430 qpair failed and we were unable to recover it. 00:33:00.430 [2024-07-15 10:06:16.964344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.430 [2024-07-15 10:06:16.964372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.430 qpair failed and we were unable to recover it. 00:33:00.430 [2024-07-15 10:06:16.964535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.430 [2024-07-15 10:06:16.964563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.430 qpair failed and we were unable to recover it. 
00:33:00.430 [2024-07-15 10:06:16.964717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.430 [2024-07-15 10:06:16.964745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.430 qpair failed and we were unable to recover it. 00:33:00.430 [2024-07-15 10:06:16.964915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.430 [2024-07-15 10:06:16.964942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.430 qpair failed and we were unable to recover it. 00:33:00.430 [2024-07-15 10:06:16.965061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.430 [2024-07-15 10:06:16.965087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.430 qpair failed and we were unable to recover it. 00:33:00.430 [2024-07-15 10:06:16.965254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.430 [2024-07-15 10:06:16.965284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.430 qpair failed and we were unable to recover it. 00:33:00.430 [2024-07-15 10:06:16.965448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.430 [2024-07-15 10:06:16.965477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.430 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.965646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.965675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.965816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.965841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.966000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.966027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.966146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.966173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.966341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.966370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 
00:33:00.431 [2024-07-15 10:06:16.966554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.966582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.966744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.966773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.966933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.966961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.967109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.967135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.967277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.967303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.967449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.967475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.967648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.967677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.967832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.967860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.968007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.968034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.968179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.968223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 
00:33:00.431 [2024-07-15 10:06:16.968380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.968409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.968641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.968671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.968834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.968863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.969040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.969066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.969201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.969230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.969390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.969419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.969578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.969606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.969790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.969818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.969991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.970017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.970178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.970207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 
00:33:00.431 [2024-07-15 10:06:16.970457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.970505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.970689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.970718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.970858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.970895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.971069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.971095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.971245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.971271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.971414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.971439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.971585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.971614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.971744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.971774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.971955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.971982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.972129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.972155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 
00:33:00.431 [2024-07-15 10:06:16.972271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.972297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.972451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.972477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.972635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.972664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.972828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.972857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.973002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.973028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.973172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.431 [2024-07-15 10:06:16.973198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.431 qpair failed and we were unable to recover it. 00:33:00.431 [2024-07-15 10:06:16.973310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.973336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.973450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.973476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.973614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.973642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.973779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.973809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 
00:33:00.432 [2024-07-15 10:06:16.973981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.974008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.974155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.974181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.974337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.974363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.974477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.974503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.974669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.974697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.974848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.974886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.975031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.975057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.975195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.975220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.975346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.975372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.975546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.975575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 
00:33:00.432 [2024-07-15 10:06:16.975735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.975763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.975959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.975986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.976103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.976130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.976278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.976304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.976475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.976501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.976628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.976657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.976819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.976848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.977021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.977048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.977169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.977195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.977346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.977387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 
00:33:00.432 [2024-07-15 10:06:16.977580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.977609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.977773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.977802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.977942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.977968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.978092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.978118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.978247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.978273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.978509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.978563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.978726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.978756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.978910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.978955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.979104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.979129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.979283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.979326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 
00:33:00.432 [2024-07-15 10:06:16.979482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.979510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.979669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.979698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.979821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.979850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.980031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.980058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.980185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.980228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.980384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.980413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.980577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.980605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.432 qpair failed and we were unable to recover it. 00:33:00.432 [2024-07-15 10:06:16.980742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.432 [2024-07-15 10:06:16.980770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.980950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.980977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.981104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.981130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 
00:33:00.433 [2024-07-15 10:06:16.981277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.981305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.981490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.981519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.981661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.981686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.981899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.981926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.982081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.982107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.982299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.982328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.982462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.982488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.982605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.982631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.982771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.982800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.982972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.982998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 
00:33:00.433 [2024-07-15 10:06:16.983117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.983144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.983306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.983335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.983497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.983523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.983714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.983742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.983943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.983969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.984144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.984170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.984338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.984367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.984497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.984526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.984696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.984722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.984924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.984954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 
00:33:00.433 [2024-07-15 10:06:16.985074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.985102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.985264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.985290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.985456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.985485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.985648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.985678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.985867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.985900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.986039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.986066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.986211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.986242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.986410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.986436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.986589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.986615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.986758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.986784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 
00:33:00.433 [2024-07-15 10:06:16.986938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.986965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.987105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.987131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.987239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.987265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.433 qpair failed and we were unable to recover it. 00:33:00.433 [2024-07-15 10:06:16.987423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.433 [2024-07-15 10:06:16.987449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.434 qpair failed and we were unable to recover it. 00:33:00.434 [2024-07-15 10:06:16.987608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.434 [2024-07-15 10:06:16.987636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.434 qpair failed and we were unable to recover it. 00:33:00.434 [2024-07-15 10:06:16.987823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.434 [2024-07-15 10:06:16.987852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.434 qpair failed and we were unable to recover it. 00:33:00.434 [2024-07-15 10:06:16.987998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.434 [2024-07-15 10:06:16.988024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.434 qpair failed and we were unable to recover it. 00:33:00.434 [2024-07-15 10:06:16.988162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.434 [2024-07-15 10:06:16.988204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.434 qpair failed and we were unable to recover it. 00:33:00.434 [2024-07-15 10:06:16.988394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.434 [2024-07-15 10:06:16.988423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.434 qpair failed and we were unable to recover it. 00:33:00.434 [2024-07-15 10:06:16.988571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.434 [2024-07-15 10:06:16.988597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.434 qpair failed and we were unable to recover it. 
00:33:00.437 [2024-07-15 10:06:17.016442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.437 [2024-07-15 10:06:17.016483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.437 qpair failed and we were unable to recover it.
00:33:00.437 [2024-07-15 10:06:17.016637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.437 [2024-07-15 10:06:17.016666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.437 qpair failed and we were unable to recover it.
00:33:00.437 [2024-07-15 10:06:17.016805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.437 [2024-07-15 10:06:17.016831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.437 qpair failed and we were unable to recover it.
00:33:00.437 [2024-07-15 10:06:17.016981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.437 [2024-07-15 10:06:17.017022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.017173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.017202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.017381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.017408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.017573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.017602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.017764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.017794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.017961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.017989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.018155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.018185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.018347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.018377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 00:33:00.438 [2024-07-15 10:06:17.018541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.018568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 00:33:00.438 [2024-07-15 10:06:17.018732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.018761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 00:33:00.438 [2024-07-15 10:06:17.018945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.018974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 00:33:00.438 [2024-07-15 10:06:17.019126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.019154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 00:33:00.438 [2024-07-15 10:06:17.019355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.019384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 00:33:00.438 [2024-07-15 10:06:17.019546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.019581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 00:33:00.438 [2024-07-15 10:06:17.019727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.019754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 00:33:00.438 [2024-07-15 10:06:17.019892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.019949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 00:33:00.438 [2024-07-15 10:06:17.020118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.020148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 
00:33:00.438 [2024-07-15 10:06:17.020308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.020335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 00:33:00.438 [2024-07-15 10:06:17.020485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.020512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 00:33:00.438 [2024-07-15 10:06:17.020658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.020684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 00:33:00.438 [2024-07-15 10:06:17.020806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.020832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 00:33:00.438 [2024-07-15 10:06:17.021006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.021037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 00:33:00.438 [2024-07-15 10:06:17.021204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.021233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 00:33:00.438 [2024-07-15 10:06:17.021400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.021426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 00:33:00.438 [2024-07-15 10:06:17.021590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.021619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 00:33:00.438 [2024-07-15 10:06:17.021756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.021785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 00:33:00.438 [2024-07-15 10:06:17.021979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.438 [2024-07-15 10:06:17.022007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.438 qpair failed and we were unable to recover it. 
00:33:00.438 [2024-07-15 10:06:17.022182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.022211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.022367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.022396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.022539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.022565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.022709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.022735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.022901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.022930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.023093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.023119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.023237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.023263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.023375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.023402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.023546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.023572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.023729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.023759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.023923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.023952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.024110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.024136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.024289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.024315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.024471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.438 [2024-07-15 10:06:17.024497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.438 qpair failed and we were unable to recover it.
00:33:00.438 [2024-07-15 10:06:17.024675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.024701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.024930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.024959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.025147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.025176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.025353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.025380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.025529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.025572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.025768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.025794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.025941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.025969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.026159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.026188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.026317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.026345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.026486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.026512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.026714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.026743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.026906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.026936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.027109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.027135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.027327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.027360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.027490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.027519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.027715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.027741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.027931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.027960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.028194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.028223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.028382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.028408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.028556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.028598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.028770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.028796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.029019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.029046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.029217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.029246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.029406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.029435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.029604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.029630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.029785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.029811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.029957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.029985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.030159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.030185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.030354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.030382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.030512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.030540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.030712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.030738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.030856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.030893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.031066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.031095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.031259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.031286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.031406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.031432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.031633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.031662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.031825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.031850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.032049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.032078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.032240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.032269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.032443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.032470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.032592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.032639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.439 qpair failed and we were unable to recover it.
00:33:00.439 [2024-07-15 10:06:17.032825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.439 [2024-07-15 10:06:17.032854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.033038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.033065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.033185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.033227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.033387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.033416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.033554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.033580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.033772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.033801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.033955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.033986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.034154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.034180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.034330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.034357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.034504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.034530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.034680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.034705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.034885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.034930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.035117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.035147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.035328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.035355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.035478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.035506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.035697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.035726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.035866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.035901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.036056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.036100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.036286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.036316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.036456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.036484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.036671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.036701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.036873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.036905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.037077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.037104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.037254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.037298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.037487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.037516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.037683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.037710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.037900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.037936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.038101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.038131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.038322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.038349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.038503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.038530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.038681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.038726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.038858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.038891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.039041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.039084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.039245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.039275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.039441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.039468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.039586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.039631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.039812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.039842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.040013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.040041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.040235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.040265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.040423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.040453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.040615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.040643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.440 qpair failed and we were unable to recover it.
00:33:00.440 [2024-07-15 10:06:17.040762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.440 [2024-07-15 10:06:17.040789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.041006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.041034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.041209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.041236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.041428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.041458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.041584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.041615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.041797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.041824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.041986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.042018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.042180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.042210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.042405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.042432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.042601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.042631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.042797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.042827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.043028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.043056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.043217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.043244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.043442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.043471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.043627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.043654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.043819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.043849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.044044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.044074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.044258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.044285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.044482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.044511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.044705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.044735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.044868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.044902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.045080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.045110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.045285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.045316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.045483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.045510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.045679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.045709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.045873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.045917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.046090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.046116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.046244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.046271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.046422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.046449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.046626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.046652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.046823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.046853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.047021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.047051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.047217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.047245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.047362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.047389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.047567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.047598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.047788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.047818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.441 qpair failed and we were unable to recover it.
00:33:00.441 [2024-07-15 10:06:17.047988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.441 [2024-07-15 10:06:17.048016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.048164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.048191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.048335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.048362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.048478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.048522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.048647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.048677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.048848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.048882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.049051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.049081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.049239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.049269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.049465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.049492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.049659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.049689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.049875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.049920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.050093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.050120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.050269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.050296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.050456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.050485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.050682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.050709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.050872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.050911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.051101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.051130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.051301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.051330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.051521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.051551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.051681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.051711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.051882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.051910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.052080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.052110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.052274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.052305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.052470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.052497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.052646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.052691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.052892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.052923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.053069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.053097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.053268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.053298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.053482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.053512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.053680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.053711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.053871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.053918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.054109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.054136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.054294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.054321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.054446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.054473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.054624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.054652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.054791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.054817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.055053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.055083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.055280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.442 [2024-07-15 10:06:17.055307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.442 qpair failed and we were unable to recover it.
00:33:00.442 [2024-07-15 10:06:17.055448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.442 [2024-07-15 10:06:17.055475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.442 qpair failed and we were unable to recover it. 00:33:00.442 [2024-07-15 10:06:17.055639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.442 [2024-07-15 10:06:17.055670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.442 qpair failed and we were unable to recover it. 00:33:00.442 [2024-07-15 10:06:17.055858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.442 [2024-07-15 10:06:17.055897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.056070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.056098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.056269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.056299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.056436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.056465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.056658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.056685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.056819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.056849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.057054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.057085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.057288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.057315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 
00:33:00.443 [2024-07-15 10:06:17.057448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.057479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.057610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.057640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.057832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.057859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.058036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.058067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.058227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.058257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.058451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.058478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.058622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.058652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.058780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.058811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.058997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.059025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.059164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.059196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 
00:33:00.443 [2024-07-15 10:06:17.059356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.059386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.059548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.059576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.059775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.059805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.060044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.060075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.060228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.060256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.060404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.060446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.060582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.060614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.060809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.060838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.060986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.061013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.061181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.061212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 
00:33:00.443 [2024-07-15 10:06:17.061379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.061406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.061554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.061585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.061764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.061809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.061973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.062002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.062235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.062265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.062449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.062478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.062645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.062672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.062824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.062852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.063031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.063062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.063233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.063261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 
00:33:00.443 [2024-07-15 10:06:17.063436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.063466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.063632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.063663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.063896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.443 [2024-07-15 10:06:17.063925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.443 qpair failed and we were unable to recover it. 00:33:00.443 [2024-07-15 10:06:17.064123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.064153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.064317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.064347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.064520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.064548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.064716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.064746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.064907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.064937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.065082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.065110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.065285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.065330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 
00:33:00.444 [2024-07-15 10:06:17.065488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.065519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.065718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.065746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.065919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.065950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.066139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.066169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.066366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.066393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.066587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.066617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.066756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.066786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.066958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.066986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.067152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.067183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.067347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.067377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 
00:33:00.444 [2024-07-15 10:06:17.067567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.067594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.067797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.067828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.067999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.068029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.068226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.068253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.068444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.068475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.068643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.068672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.068808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.068853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.069046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.069074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.069305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.069335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.069532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.069559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 
00:33:00.444 [2024-07-15 10:06:17.069755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.069785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.069963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.069994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.070115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.070142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.070258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.070286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.070505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.070532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.070716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.070743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.070912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.070941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.071125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.071153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.071340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.071365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.071514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.071540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 
00:33:00.444 [2024-07-15 10:06:17.071685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.071727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.071919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.071945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.072118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.072146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.444 [2024-07-15 10:06:17.072310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.444 [2024-07-15 10:06:17.072338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.444 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.072503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.072529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.072667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.072697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.072853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.072889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.073072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.073100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.073273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.073311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.073514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.073558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 
00:33:00.445 [2024-07-15 10:06:17.073760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.073789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.073946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.073974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.074168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.074216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.074409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.074436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.074576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.074614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.074832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.074873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.075086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.075123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.075274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.075311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.075516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.075544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.075675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.075704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 
00:33:00.445 [2024-07-15 10:06:17.075848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.075874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.076034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.076062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.076216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.076252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.076390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.076420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.076584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.076614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.076804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.076830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.076955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.076992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.077126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.077153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.077279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.077307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.077456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.077482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 
00:33:00.445 [2024-07-15 10:06:17.077634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.077662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.077809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.077842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.078024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.078051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.078179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.078231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.078408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.078437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.078614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.078641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.078792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.078819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.078999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.079026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.079160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.079188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.079340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.079372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 
00:33:00.445 [2024-07-15 10:06:17.079530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.079556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.079715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.079742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.079922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.079954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.080117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.080145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.445 qpair failed and we were unable to recover it. 00:33:00.445 [2024-07-15 10:06:17.080320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.445 [2024-07-15 10:06:17.080347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.080510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.080545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.080702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.080738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.080893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.080931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.081080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.081108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.081242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.081269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 
00:33:00.446 [2024-07-15 10:06:17.081425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.081453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.081628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.081655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.081775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.081803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.081929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.081967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.082121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.082147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.082265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.082292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.082428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.082455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.082579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.082607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.082788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.082849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.083021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.083051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 
00:33:00.446 [2024-07-15 10:06:17.083170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.083199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.083406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.083437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.083650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.083694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.083831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.083862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.084016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.084043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.084189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.084216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.084364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.084391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.084547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.084573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.084747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.084799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 00:33:00.446 [2024-07-15 10:06:17.084950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.446 [2024-07-15 10:06:17.084978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.446 qpair failed and we were unable to recover it. 
00:33:00.446 [2024-07-15 10:06:17.085087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.446 [2024-07-15 10:06:17.085114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.446 qpair failed and we were unable to recover it.
[the same three-line failure repeats verbatim for every reconnect attempt, roughly 200 more times between 10:06:17.085261 and 10:06:17.121630 (log time 00:33:00.446 through 00:33:00.451); only the microsecond timestamps change — every attempt targets tqpair=0xb2c450 at 10.0.0.2, port 4420 and fails with errno = 111]
00:33:00.451 [2024-07-15 10:06:17.121790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.451 [2024-07-15 10:06:17.121815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.451 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.121963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.121990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.122106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.122132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.122344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.122371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.122512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.122542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.122708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.122734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.122882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.122909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.123052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.123078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.123269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.123298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.123444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.123475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 
00:33:00.452 [2024-07-15 10:06:17.123628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.123654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.123799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.123826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.124004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.124030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.124223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.124252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.124371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.124400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.124566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.124591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.124754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.124783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.124944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.124973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.125117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.125143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.125282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.125325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 
00:33:00.452 [2024-07-15 10:06:17.125458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.125487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.125649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.125675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.125867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.125905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.126045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.126074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.126236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.126262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.126403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.126432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.126592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.126621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.126787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.126813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.126940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.126967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.127085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.127111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 
00:33:00.452 [2024-07-15 10:06:17.127281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.127307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.127446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.127475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.127662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.127691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.127852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.127884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.128033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.128059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.128196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.128225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.128417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.128443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.128600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.128625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.128832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.128861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.129035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.129061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 
00:33:00.452 [2024-07-15 10:06:17.129207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.129236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.129421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.452 [2024-07-15 10:06:17.129450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.452 qpair failed and we were unable to recover it. 00:33:00.452 [2024-07-15 10:06:17.129644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.129670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.129799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.129825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.129972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.130000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.130175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.130201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.130368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.130397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.130530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.130559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.130731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.130757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.130909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.130937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 
00:33:00.453 [2024-07-15 10:06:17.131087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.131117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.131293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.131319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.131481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.131509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.131665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.131693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.131890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.131917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.132036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.132062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.132201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.132230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.132398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.132424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.132602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.132631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.132765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.132794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 
00:33:00.453 [2024-07-15 10:06:17.132957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.132984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.133093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.133119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.133321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.133347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.133496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.133522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.133691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.133719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.133885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.133915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.134074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.134099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.134268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.134297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.134484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.134512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.134659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.134685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 
00:33:00.453 [2024-07-15 10:06:17.134843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.134868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.134989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.135015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.135155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.135181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.135334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.135363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.135522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.135552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.135721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.135747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.135916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.135946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.136102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.136135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.136298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.136325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.136474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.136501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 
00:33:00.453 [2024-07-15 10:06:17.136622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.136648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.136817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.136843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.137020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.453 [2024-07-15 10:06:17.137046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.453 qpair failed and we were unable to recover it. 00:33:00.453 [2024-07-15 10:06:17.137212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.137241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.137373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.137399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.137591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.137620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.137781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.137810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.137973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.138000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.138116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.138142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.138290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.138319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 
00:33:00.454 [2024-07-15 10:06:17.138508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.138534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.138680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.138710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.138838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.138867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.139040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.139067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.139232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.139261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.139423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.139449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.139621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.139648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.139768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.139794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.139965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.140007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.140154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.140180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 
00:33:00.454 [2024-07-15 10:06:17.140291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.140317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.140492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.140521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.140670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.140696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.140815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.140842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.140996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.141023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.141170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.141196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.141367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.141393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.141588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.141617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.141764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.141790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.141960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.141987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 
00:33:00.454 [2024-07-15 10:06:17.142157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.142186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.142354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.142380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.142494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.142521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.142675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.142703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.142853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.142886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.143066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.143092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.143299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.143327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.143463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.143488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.143635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.454 [2024-07-15 10:06:17.143681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.454 qpair failed and we were unable to recover it. 00:33:00.454 [2024-07-15 10:06:17.143888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.455 [2024-07-15 10:06:17.143915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.455 qpair failed and we were unable to recover it. 
00:33:00.455 [2024-07-15 10:06:17.144052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.455 [2024-07-15 10:06:17.144078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.455 qpair failed and we were unable to recover it. 00:33:00.455 [2024-07-15 10:06:17.144223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.455 [2024-07-15 10:06:17.144249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.455 qpair failed and we were unable to recover it. 00:33:00.455 [2024-07-15 10:06:17.144392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.455 [2024-07-15 10:06:17.144417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.455 qpair failed and we were unable to recover it. 00:33:00.455 [2024-07-15 10:06:17.144567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.455 [2024-07-15 10:06:17.144593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.455 qpair failed and we were unable to recover it. 00:33:00.455 [2024-07-15 10:06:17.144784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.455 [2024-07-15 10:06:17.144812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.455 qpair failed and we were unable to recover it. 00:33:00.455 [2024-07-15 10:06:17.144964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.455 [2024-07-15 10:06:17.144993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.455 qpair failed and we were unable to recover it. 00:33:00.455 [2024-07-15 10:06:17.145193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.455 [2024-07-15 10:06:17.145219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.455 qpair failed and we were unable to recover it. 00:33:00.455 [2024-07-15 10:06:17.145397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.455 [2024-07-15 10:06:17.145426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.455 qpair failed and we were unable to recover it. 00:33:00.455 [2024-07-15 10:06:17.145547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.455 [2024-07-15 10:06:17.145575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.455 qpair failed and we were unable to recover it. 00:33:00.455 [2024-07-15 10:06:17.145725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.455 [2024-07-15 10:06:17.145751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.455 qpair failed and we were unable to recover it. 
00:33:00.455 [2024-07-15 10:06:17.145904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.455 [2024-07-15 10:06:17.145946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.455 qpair failed and we were unable to recover it.
[... the three-line error record above repeats continuously, with only the timestamps advancing (roughly 200 occurrences from 10:06:17.145904 through 10:06:17.185511); the final occurrence is kept below ...]
00:33:00.733 [2024-07-15 10:06:17.185485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.733 [2024-07-15 10:06:17.185511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.733 qpair failed and we were unable to recover it.
00:33:00.733 [2024-07-15 10:06:17.185647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.733 [2024-07-15 10:06:17.185689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.733 qpair failed and we were unable to recover it. 00:33:00.733 [2024-07-15 10:06:17.185825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.733 [2024-07-15 10:06:17.185854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.733 qpair failed and we were unable to recover it. 00:33:00.733 [2024-07-15 10:06:17.186019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.733 [2024-07-15 10:06:17.186046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.733 qpair failed and we were unable to recover it. 00:33:00.733 [2024-07-15 10:06:17.186187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.733 [2024-07-15 10:06:17.186231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.733 qpair failed and we were unable to recover it. 00:33:00.733 [2024-07-15 10:06:17.186393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.733 [2024-07-15 10:06:17.186422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.733 qpair failed and we were unable to recover it. 00:33:00.733 [2024-07-15 10:06:17.186617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.733 [2024-07-15 10:06:17.186644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.733 qpair failed and we were unable to recover it. 00:33:00.733 [2024-07-15 10:06:17.186841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.733 [2024-07-15 10:06:17.186870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.733 qpair failed and we were unable to recover it. 00:33:00.733 [2024-07-15 10:06:17.187038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.733 [2024-07-15 10:06:17.187067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.733 qpair failed and we were unable to recover it. 00:33:00.733 [2024-07-15 10:06:17.187228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.733 [2024-07-15 10:06:17.187254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.733 qpair failed and we were unable to recover it. 00:33:00.733 [2024-07-15 10:06:17.187390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.733 [2024-07-15 10:06:17.187432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.733 qpair failed and we were unable to recover it. 
00:33:00.734 [2024-07-15 10:06:17.187598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.187627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.187790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.187817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.187978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.188008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.188142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.188172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.188306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.188332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.188523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.188552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.188748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.188774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.188918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.188944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.189066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.189108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.189233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.189262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 
00:33:00.734 [2024-07-15 10:06:17.189434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.189460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.189651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.189680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.189834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.189863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.190034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.190065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.190255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.190284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.190473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.190502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.190639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.190665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.190781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.190808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.191018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.191047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.191205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.191232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 
00:33:00.734 [2024-07-15 10:06:17.191387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.191413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.191556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.191582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.191722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.191748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.191909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.191939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.192095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.192124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.192256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.192282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.192402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.192429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.192618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.192645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.192764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.192790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.192940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.192983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 
00:33:00.734 [2024-07-15 10:06:17.193177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.193203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.193375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.193401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.193569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.193598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.193740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.193768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.193944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.193971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.194164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.194193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.194346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.194375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.194528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.194554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.194739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.194767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.194903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.194933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 
00:33:00.734 [2024-07-15 10:06:17.195130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.195161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.195337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.195366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.195503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.195531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.195703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.195729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.195866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.195901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.196060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.196089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.196259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.196285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.196400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.196426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.196643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.196669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.196847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.196873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 
00:33:00.734 [2024-07-15 10:06:17.197053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.197081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.197211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.197240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.197386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.197412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.197553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.197595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.197785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.197814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.197978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.198005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.198134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.198161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.198309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.198352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.198516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.198542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.198728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.198757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 
00:33:00.734 [2024-07-15 10:06:17.198890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.198919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.199112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.199138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.734 [2024-07-15 10:06:17.199264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.734 [2024-07-15 10:06:17.199290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.734 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.199441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.199467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.199608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.199634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.199755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.199797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.199926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.199955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.200119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.200145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.200268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.200295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.200462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.200491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 
00:33:00.735 [2024-07-15 10:06:17.200660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.200686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.200883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.200913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.201044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.201073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.201242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.201268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.201433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.201461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.201625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.201654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.201818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.201844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.202020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.202050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.202220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.202248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.202405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.202431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 
00:33:00.735 [2024-07-15 10:06:17.202593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.202622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.202759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.202792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.202966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.202993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.203147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.203188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.203353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.203379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.203553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.203580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.203750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.203779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.203948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.203978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.204144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.204170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.204332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.204361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 
00:33:00.735 [2024-07-15 10:06:17.204545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.204574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.204715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.204741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.204894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.204937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.205068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.205096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.205292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.205318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.205443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.205470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.205611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.205637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.205778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.205803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.205931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.205976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.206161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.206190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 
00:33:00.735 [2024-07-15 10:06:17.206353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.206379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.206547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.206576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.206739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.206768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.206896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.206923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.207068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.207094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.207278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.207308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.207458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.207484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.207678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.207707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.207862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.207903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.208071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.208097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 
00:33:00.735 [2024-07-15 10:06:17.208285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.208314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.208503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.208531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.208719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.208745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.208911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.208940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.209074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.209102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.209276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.209302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.209476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.209506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.209702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.209728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.209871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.209912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 00:33:00.735 [2024-07-15 10:06:17.210077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.210106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it. 
00:33:00.735 [2024-07-15 10:06:17.210298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.735 [2024-07-15 10:06:17.210324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.735 qpair failed and we were unable to recover it.
00:33:00.735 [... the same three-line error group (connect() failed, errno = 111; sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 10:06:17.210 through 10:06:17.249 ...]
00:33:00.738 [2024-07-15 10:06:17.249777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.249803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it.
00:33:00.738 [2024-07-15 10:06:17.249979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.250009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.250171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.250200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.250354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.250380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.250550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.250579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.250741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.250770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.250955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.250982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.251177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.251207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.251339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.251368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.251564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.251590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.251726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.251756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 
00:33:00.738 [2024-07-15 10:06:17.251895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.251925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.252097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.252123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.252271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.252316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.252500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.252529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.252722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.252748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.252937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.252967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.253107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.253136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.253304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.253330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.253479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.253505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.253650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.253676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 
00:33:00.738 [2024-07-15 10:06:17.253826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.253853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.253999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.254028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.254156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.254185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.738 qpair failed and we were unable to recover it. 00:33:00.738 [2024-07-15 10:06:17.254326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.738 [2024-07-15 10:06:17.254356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.254468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.254494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.254689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.254718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.254888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.254915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.255083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.255112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.255295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.255324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.255510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.255537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 
00:33:00.739 [2024-07-15 10:06:17.255696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.255725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.255891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.255919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.256043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.256069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.256241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.256270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.256426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.256455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.256624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.256650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.256797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.256838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.256995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.257023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.257171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.257198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.257356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.257385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 
00:33:00.739 [2024-07-15 10:06:17.257539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.257569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.257760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.257786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.257952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.257982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.258117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.258146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.258303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.258330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.258493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.258522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.258676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.258705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.258862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.258903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.259027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.259072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.259204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.259233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 
00:33:00.739 [2024-07-15 10:06:17.259374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.259400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.259582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.259608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.259779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.259808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.259973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.260000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.260113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.260139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.260319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.260348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.260491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.260517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.260668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.260694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.260803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.260829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.260957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.260985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 
00:33:00.739 [2024-07-15 10:06:17.261150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.261179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.261310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.261339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.261505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.261531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.261670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.261713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.261840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.261873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.262028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.262055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.262194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.262237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.262428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.262457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.262652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.262678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.262845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.262874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 
00:33:00.739 [2024-07-15 10:06:17.263047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.263076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.263214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.263241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.263377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.263404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.263602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.263631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.263803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.263830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.264004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.264034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.264163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.264192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.264335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.264361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.264517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.264543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.264673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.264702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 
00:33:00.739 [2024-07-15 10:06:17.264902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.264929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.265093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.265122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.265278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.265307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.265474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.265501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.265649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.265692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.265857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.265905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.266073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.266100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.266223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.266251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.266406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.266433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.266613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.266639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 
00:33:00.739 [2024-07-15 10:06:17.266837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.266866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.267041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.267075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.267244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.267270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.267389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.267415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.267556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.267582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.267737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.267765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.267908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.267936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.268077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.268119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.268318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.739 [2024-07-15 10:06:17.268344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.739 qpair failed and we were unable to recover it. 00:33:00.739 [2024-07-15 10:06:17.268452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.268495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 
00:33:00.740 [2024-07-15 10:06:17.268682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.268710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.268859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.268892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.269016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.269042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.269182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.269211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.269380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.269405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.269548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.269592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.269751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.269780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.269950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.269977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.270099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.270125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.270274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.270300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 
00:33:00.740 [2024-07-15 10:06:17.270447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.270473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.270598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.270624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.270766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.270792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.270973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.271000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.271151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.271194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.271326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.271355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.271522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.271549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.271738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.271766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.271893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.271923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.272090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.272116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 
00:33:00.740 [2024-07-15 10:06:17.272280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.272309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.272493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.272522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.272685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.272711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.272819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.272845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.272984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.273013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.273153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.273179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.273328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.273370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.273542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.273568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.273676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.273702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 00:33:00.740 [2024-07-15 10:06:17.273825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.740 [2024-07-15 10:06:17.273851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.740 qpair failed and we were unable to recover it. 
00:33:00.740 [2024-07-15 10:06:17.273997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.740 [2024-07-15 10:06:17.274024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.740 qpair failed and we were unable to recover it.
00:33:00.742 [the three-line failure above repeats, unchanged except for timestamps, from 10:06:17.274145 through 10:06:17.298829: every connect() to 10.0.0.2, port=4420 is refused with errno = 111, and every attempt on tqpair=0xb2c450 ends with "qpair failed and we were unable to recover it."]
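Every attempt in the run above fails identically: connect() returns errno = 111, which on Linux is ECONNREFUSED, meaning the TCP SYN reached 10.0.0.2 but nothing was accepting connections on port 4420 (the standard NVMe/TCP port), so the peer answered with a reset. As a minimal sketch of that failure mode in plain POSIX sockets (not SPDK code; only the address and port are taken from the log):

/* Minimal sketch, not SPDK code: reproduces the errno = 111 pattern above.
 * With a reachable host but no listener on the port, connect() fails with
 * ECONNREFUSED, which is 111 on Linux. Address/port copied from the log. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

A refused connect like this usually means the NVMe/TCP target application was not (or was no longer) listening when the initiator retried. The log then continues with a fresh qpair at a new address: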
00:33:00.742 [2024-07-15 10:06:17.299006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.742 [2024-07-15 10:06:17.299047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:00.742 qpair failed and we were unable to recover it.
00:33:00.743 [the same three-line failure then repeats for the new qpair from 10:06:17.299230 through 10:06:17.315132: every connect() to 10.0.0.2, port=4420 is again refused with errno = 111, and every attempt on tqpair=0x7f0268000b90 ends with "qpair failed and we were unable to recover it."]
00:33:00.743 [2024-07-15 10:06:17.315303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.743 [2024-07-15 10:06:17.315332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.743 qpair failed and we were unable to recover it. 00:33:00.743 [2024-07-15 10:06:17.315610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.743 [2024-07-15 10:06:17.315663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.743 qpair failed and we were unable to recover it. 00:33:00.743 [2024-07-15 10:06:17.315816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.743 [2024-07-15 10:06:17.315844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:00.743 qpair failed and we were unable to recover it. 00:33:00.743 [2024-07-15 10:06:17.316048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.743 [2024-07-15 10:06:17.316089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.743 qpair failed and we were unable to recover it. 00:33:00.743 [2024-07-15 10:06:17.316404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.743 [2024-07-15 10:06:17.316476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.743 qpair failed and we were unable to recover it. 00:33:00.743 [2024-07-15 10:06:17.316647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.743 [2024-07-15 10:06:17.316677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.743 qpair failed and we were unable to recover it. 00:33:00.743 [2024-07-15 10:06:17.316834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.743 [2024-07-15 10:06:17.316863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.743 qpair failed and we were unable to recover it. 00:33:00.743 [2024-07-15 10:06:17.317077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.743 [2024-07-15 10:06:17.317134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.743 qpair failed and we were unable to recover it. 00:33:00.743 [2024-07-15 10:06:17.317319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.743 [2024-07-15 10:06:17.317370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.743 qpair failed and we were unable to recover it. 00:33:00.743 [2024-07-15 10:06:17.317567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.743 [2024-07-15 10:06:17.317611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.743 qpair failed and we were unable to recover it. 
[... triplet repeats for tqpair=0x7f0260000b90 through 2024-07-15 10:06:17.331587 ...]
00:33:00.744 [2024-07-15 10:06:17.331763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.744 [2024-07-15 10:06:17.331803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.744 qpair failed and we were unable to recover it.
[... triplet repeats for tqpair=0xb2c450 through 2024-07-15 10:06:17.337805 ...]
00:33:00.744 [2024-07-15 10:06:17.337955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.744 [2024-07-15 10:06:17.337996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:00.744 qpair failed and we were unable to recover it.
[... triplet repeats for tqpair=0x7f0258000b90 through 2024-07-15 10:06:17.340855 ...]
00:33:00.744 [2024-07-15 10:06:17.341035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.744 [2024-07-15 10:06:17.341074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:00.744 qpair failed and we were unable to recover it.
[... triplet repeats for tqpair=0x7f0268000b90 through 2024-07-15 10:06:17.341838 ...]
00:33:00.744 [2024-07-15 10:06:17.341976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.745 [2024-07-15 10:06:17.342016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.745 qpair failed and we were unable to recover it.
[... triplet repeats for tqpair=0xb2c450 through 2024-07-15 10:06:17.349742 ...]
00:33:00.745 [2024-07-15 10:06:17.349917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.349944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.350090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.350119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.350246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.350273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.350385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.350424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.350539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.350565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.350711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.350738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.350894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.350921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.351043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.351069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.351202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.351231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.351407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.351436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 
00:33:00.745 [2024-07-15 10:06:17.351596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.351625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.351782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.351811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.351980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.352008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.352135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.352162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.352319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.352346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.352463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.352489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.352650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.352679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.352828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.352857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.353018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.353045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.353158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.353192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 
00:33:00.745 [2024-07-15 10:06:17.353318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.353344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.353488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.353514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.353710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.353739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.353864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.353905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.354043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.354070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.354187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.354213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.354358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.354387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.354538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.354565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.354718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.354747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.354926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.354953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 
00:33:00.745 [2024-07-15 10:06:17.355119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.355145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.355338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.355364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.355501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.355527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.355690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.355716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.355863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.355909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.745 [2024-07-15 10:06:17.356029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.745 [2024-07-15 10:06:17.356056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.745 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.356171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.356220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.356398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.356424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.356556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.356581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.356692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.356718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 
00:33:00.746 [2024-07-15 10:06:17.356884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.356911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.357042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.357068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.357212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.357241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.357424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.357449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.357575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.357601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.357758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.357784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.357908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.357935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.358057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.358084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.358256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.358285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.358445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.358471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 
00:33:00.746 [2024-07-15 10:06:17.358617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.358643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.358783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.358815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.358945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.358972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.359101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.359127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.359246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.359276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.359392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.359418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.359547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.359573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.359699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.359725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.359841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.359867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.359992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.360018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 
00:33:00.746 [2024-07-15 10:06:17.360141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.360167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.360311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.360338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.360507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.360533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.360679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.360704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.360826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.360863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.360992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.361019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.361136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.361162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.361307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.361333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.361488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.361515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.361683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.361709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 
00:33:00.746 [2024-07-15 10:06:17.361830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.361856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.362030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.362057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.362184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.362211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.362335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.362361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.362503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.362530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.362658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.362684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.362850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.362896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.363043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.363068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.363210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.363240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.363426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.363452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 
00:33:00.746 [2024-07-15 10:06:17.363578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.363605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.363770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.363796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.363938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.363965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.364099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.364126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.364247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.364272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.364409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.364435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.364583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.364611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.364729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.364755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.364881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.364908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.365035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.365061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 
00:33:00.746 [2024-07-15 10:06:17.365237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.365263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.365416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.365442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.365550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.365576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.365698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.365736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.365852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.365899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.366055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.366082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.366257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.366283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.366407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.366433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.366577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.366604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.366756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.366782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 
00:33:00.746 [2024-07-15 10:06:17.366916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.366943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.367062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.367088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.367252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.367278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.367446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.367472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.367589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.367615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.367741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.367767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.368535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.368566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.368736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.368763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.368936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.368964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.369099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.369127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 
00:33:00.746 [2024-07-15 10:06:17.369264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.369290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.369412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.369439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.746 qpair failed and we were unable to recover it. 00:33:00.746 [2024-07-15 10:06:17.369584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.746 [2024-07-15 10:06:17.369610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.747 qpair failed and we were unable to recover it. 00:33:00.747 [2024-07-15 10:06:17.369735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.747 [2024-07-15 10:06:17.369762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.747 qpair failed and we were unable to recover it. 00:33:00.747 [2024-07-15 10:06:17.369904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.747 [2024-07-15 10:06:17.369931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.747 qpair failed and we were unable to recover it. 00:33:00.747 [2024-07-15 10:06:17.370054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.747 [2024-07-15 10:06:17.370081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.747 qpair failed and we were unable to recover it. 00:33:00.747 [2024-07-15 10:06:17.370232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.747 [2024-07-15 10:06:17.370258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.747 qpair failed and we were unable to recover it. 00:33:00.747 [2024-07-15 10:06:17.370411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.747 [2024-07-15 10:06:17.370437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.747 qpair failed and we were unable to recover it. 00:33:00.747 [2024-07-15 10:06:17.370568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.747 [2024-07-15 10:06:17.370594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.747 qpair failed and we were unable to recover it. 00:33:00.747 [2024-07-15 10:06:17.370713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.747 [2024-07-15 10:06:17.370739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.747 qpair failed and we were unable to recover it. 
00:33:00.747 [2024-07-15 10:06:17.370910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.747 [2024-07-15 10:06:17.370937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.747 qpair failed and we were unable to recover it. 00:33:00.747 [2024-07-15 10:06:17.371056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.747 [2024-07-15 10:06:17.371082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.747 qpair failed and we were unable to recover it. 00:33:00.747 [2024-07-15 10:06:17.371223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.747 [2024-07-15 10:06:17.371252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.747 qpair failed and we were unable to recover it. 00:33:00.747 [2024-07-15 10:06:17.371379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.747 [2024-07-15 10:06:17.371405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.747 qpair failed and we were unable to recover it. 00:33:00.747 [2024-07-15 10:06:17.371553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.747 [2024-07-15 10:06:17.371590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.747 qpair failed and we were unable to recover it. 00:33:00.747 [2024-07-15 10:06:17.371762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.747 [2024-07-15 10:06:17.371788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.747 qpair failed and we were unable to recover it. 00:33:00.747 [2024-07-15 10:06:17.371899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.747 [2024-07-15 10:06:17.371927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.747 qpair failed and we were unable to recover it. 00:33:00.747 [2024-07-15 10:06:17.372050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.747 [2024-07-15 10:06:17.372076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.747 qpair failed and we were unable to recover it. 00:33:00.747 [2024-07-15 10:06:17.372188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.747 [2024-07-15 10:06:17.372214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.747 qpair failed and we were unable to recover it. 00:33:00.747 [2024-07-15 10:06:17.372381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.747 [2024-07-15 10:06:17.372407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.747 qpair failed and we were unable to recover it. 
00:33:00.747 [2024-07-15 10:06:17.372560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.747 [2024-07-15 10:06:17.372586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.747 qpair failed and we were unable to recover it.
[... the three-line connect()/qpair-failure sequence above repeats, with only sub-millisecond timestamp changes, roughly 200 more times (10:06:17.372718 through 10:06:17.412784), always with errno = 111 against tqpair=0xb2c450, addr=10.0.0.2, port=4420 ...]
00:33:00.750 [2024-07-15 10:06:17.412942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.750 [2024-07-15 10:06:17.412969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.750 qpair failed and we were unable to recover it.
00:33:00.750 [2024-07-15 10:06:17.413102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.413140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.413306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.413335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.413477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.413504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.413662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.413689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.413840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.413888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.414017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.414044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.414194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.414221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.414392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.414438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.414584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.414611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.414753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.414780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 
00:33:00.750 [2024-07-15 10:06:17.414932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.414960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.415107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.415134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.415326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.415358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.415545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.415596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.415735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.415765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.415930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.415957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.416078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.416105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.416286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.416312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.416434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.416478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.416672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.416719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 
00:33:00.750 [2024-07-15 10:06:17.416886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.416931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.417060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.417086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.417229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.417255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.417373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.417399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.417549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.417575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.417735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.417762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.417904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.417931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.418073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.418113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.418288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.418317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.418451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.418479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 
00:33:00.750 [2024-07-15 10:06:17.418604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.418631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.418783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.418809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.418967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.418995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.419144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.419177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.419346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.419373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.419519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.419546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.419668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.419698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.419834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.419861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.420009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.420036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.420179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.420208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 
00:33:00.750 [2024-07-15 10:06:17.420383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.420411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.420581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.420607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.420735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.420761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.420936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.420962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.421087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.421113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.421270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.421299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.421482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.421511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.421691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.421717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.421837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.421863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.421998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.422024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 
00:33:00.750 [2024-07-15 10:06:17.422145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.422187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.422339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.422368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.422541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.422566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.422724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.422750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.422931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.422958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.423104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.423129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.423265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.423293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.423442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.423468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.423622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.423647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.423792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.423817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 
00:33:00.750 [2024-07-15 10:06:17.423960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.423987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.424106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.424132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.424248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.424274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.424402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.424428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.424571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.424597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.424770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.424795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.424925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.424951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.425115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.425140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.425322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.425351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.425547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.425576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 
00:33:00.750 [2024-07-15 10:06:17.425715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.425740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.425857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.425906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.426084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.426110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.426277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.426305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.750 [2024-07-15 10:06:17.426442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.750 [2024-07-15 10:06:17.426467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.750 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.426598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.426624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.426746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.426771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.426920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.426946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.427094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.427120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.427281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.427307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 
00:33:00.751 [2024-07-15 10:06:17.427455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.427480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.427625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.427654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.427812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.427838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.427989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.428015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.428142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.428186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.428412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.428440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.428575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.428600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.428726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.428752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.428905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.428932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.429052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.429078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 
00:33:00.751 [2024-07-15 10:06:17.429231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.429257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.429412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.429438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.429548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.429573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.429705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.429731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.429850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.429884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.430042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.430068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.430218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.430244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.430422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.430451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.430614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.430639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.430756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.430782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 
00:33:00.751 [2024-07-15 10:06:17.430907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.430933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.431048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.431074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.431229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.431255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.431467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.431515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.431684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.431709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.431835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.431861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.431992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.432018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.432144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.432169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.432294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.432320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.432468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.432493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 
00:33:00.751 [2024-07-15 10:06:17.432612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.432638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.432787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.432814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.432947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.432974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.433100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.433126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.433256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.433282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.433447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.433475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.433654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.433679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.433857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.433900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.434020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.434045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.434259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.434287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 
00:33:00.751 [2024-07-15 10:06:17.434469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.434498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.434637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.434662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.434835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.434861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.435020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.435047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.435176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.435201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.435374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.435400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.435530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.435556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.435687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.435712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.435826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.435852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 00:33:00.751 [2024-07-15 10:06:17.436015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.751 [2024-07-15 10:06:17.436042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.751 qpair failed and we were unable to recover it. 
00:33:00.751 [2024-07-15 10:06:17.436211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.751 [2024-07-15 10:06:17.436253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.751 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnect attempt logged between 10:06:17.436211 and 10:06:17.472112: connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420, and the qpair cannot be recovered ...]
00:33:00.754 [2024-07-15 10:06:17.472085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:00.754 [2024-07-15 10:06:17.472112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:00.754 qpair failed and we were unable to recover it.
00:33:00.754 [2024-07-15 10:06:17.472260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.754 [2024-07-15 10:06:17.472286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.754 qpair failed and we were unable to recover it. 00:33:00.754 [2024-07-15 10:06:17.472442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.754 [2024-07-15 10:06:17.472468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.754 qpair failed and we were unable to recover it. 00:33:00.754 [2024-07-15 10:06:17.472588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.754 [2024-07-15 10:06:17.472613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.754 qpair failed and we were unable to recover it. 00:33:00.754 [2024-07-15 10:06:17.472754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.754 [2024-07-15 10:06:17.472779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.754 qpair failed and we were unable to recover it. 00:33:00.754 [2024-07-15 10:06:17.472892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.754 [2024-07-15 10:06:17.472918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.754 qpair failed and we were unable to recover it. 00:33:00.754 [2024-07-15 10:06:17.473074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.754 [2024-07-15 10:06:17.473099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.754 qpair failed and we were unable to recover it. 00:33:00.754 [2024-07-15 10:06:17.473247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.754 [2024-07-15 10:06:17.473273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.754 qpair failed and we were unable to recover it. 00:33:00.754 [2024-07-15 10:06:17.473430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.754 [2024-07-15 10:06:17.473455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.754 qpair failed and we were unable to recover it. 00:33:00.754 [2024-07-15 10:06:17.473607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.754 [2024-07-15 10:06:17.473632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.754 qpair failed and we were unable to recover it. 00:33:00.754 [2024-07-15 10:06:17.473780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.754 [2024-07-15 10:06:17.473805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.754 qpair failed and we were unable to recover it. 
00:33:00.754 [2024-07-15 10:06:17.473950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.473976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.474121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.474146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.474300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.474325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.474449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.474480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.474630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.474656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.474829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.474854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.475009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.475034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.475160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.475186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.475331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.475356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.475478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.475503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 
00:33:00.755 [2024-07-15 10:06:17.475628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.475653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.475800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.475825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.475947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.475974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.476120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.476145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.476316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.476342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.476484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.476509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.476659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.476685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.476859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.476891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.477048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.477074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.477224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.477249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 
00:33:00.755 [2024-07-15 10:06:17.477420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.477445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.477614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.477643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.477780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.477805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.477979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.478005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.478172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.478198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.478358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.478383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.478528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.478553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.478728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.478753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.478911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.478937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.479113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.479139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 
00:33:00.755 [2024-07-15 10:06:17.479283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.479309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.479500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.479525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.479699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.479725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.479884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.479909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.480032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.480058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.480204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.480230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.480374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.480399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.480559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.480585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.480731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.480756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.480889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.480915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 
00:33:00.755 [2024-07-15 10:06:17.481093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.481119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.481242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.481268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.481394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.481419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.481561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.481587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.481730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.481758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.481923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.481950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.482077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.482103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.482258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.482284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.482431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.482457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.482606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.482632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 
00:33:00.755 [2024-07-15 10:06:17.482778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.482803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.482973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.482999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.483148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.483174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.483314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.483339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.483478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.483503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.483678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.483704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.483845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.483870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.483991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.484016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.484163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.484188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.484360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.484385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 
00:33:00.755 [2024-07-15 10:06:17.484528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.484553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.484729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.484754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.484861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.484894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.485014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.485039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.485188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.485214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.485353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.485378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.485499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.485525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.485697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.755 [2024-07-15 10:06:17.485723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.755 qpair failed and we were unable to recover it. 00:33:00.755 [2024-07-15 10:06:17.485865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.485896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.486039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.486065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 
00:33:00.756 [2024-07-15 10:06:17.486181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.486207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.486352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.486377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.486556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.486581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.486760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.486785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.486938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.486965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.487121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.487147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.487322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.487347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.487488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.487513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.487664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.487690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.487861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.487894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 
00:33:00.756 [2024-07-15 10:06:17.488081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.488106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.488226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.488252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.488377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.488403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.488542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.488567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.488720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.488746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.488893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.488920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.489080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.489106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.489230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.489255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.489383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.489409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.489556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.489582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 
00:33:00.756 [2024-07-15 10:06:17.489697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.489722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.489871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.489903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.490041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.490066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.490178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.490204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.490353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.490379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.490526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.490552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.490703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.490728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.490901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.490927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.491069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.491094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.491238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.491263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 
00:33:00.756 [2024-07-15 10:06:17.491416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.491442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.491613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.491638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.491750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.491775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.491932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.491958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.492109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.492134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.492277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.492302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.492454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.492479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.492619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.492644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.492835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.492863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.493079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.493108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 
00:33:00.756 [2024-07-15 10:06:17.493305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.493333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.493487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.493515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.493726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.493758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.493938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.493967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.494132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.494157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.494275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.494300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.494451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.494476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.494597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.494624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.494802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.494828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.494978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.495004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 
00:33:00.756 [2024-07-15 10:06:17.495118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.495144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.495294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.495320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.495457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.495483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.495631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.495657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.495767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.495793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.495966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.495993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.496115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.496141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.496298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.496324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.756 [2024-07-15 10:06:17.496448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.756 [2024-07-15 10:06:17.496484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.756 qpair failed and we were unable to recover it. 00:33:00.757 [2024-07-15 10:06:17.496626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.757 [2024-07-15 10:06:17.496652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:00.757 qpair failed and we were unable to recover it. 
00:33:01.054 [2024-07-15 10:06:17.531009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.531035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.531207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.531232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.531408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.531433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.531579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.531605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.531750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.531775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.531925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.531951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.532067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.532093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.532230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.532255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.532400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.532425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.532572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.532597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 
00:33:01.054 [2024-07-15 10:06:17.532744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.532769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.532949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.532975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.533198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.533224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.533367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.533392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.533566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.533591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.533763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.533789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.533937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.533963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.534113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.534139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.534286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.534313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.534462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.534488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 
00:33:01.054 [2024-07-15 10:06:17.534647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.534672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.534814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.054 [2024-07-15 10:06:17.534839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.054 qpair failed and we were unable to recover it. 00:33:01.054 [2024-07-15 10:06:17.535013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.535039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.535265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.535290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.535518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.535543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.535715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.535740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.535890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.535916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.536031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.536056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.536199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.536225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.536366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.536391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 
00:33:01.055 [2024-07-15 10:06:17.536559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.536584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.536734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.536759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.536904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.536930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.537102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.537128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.537275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.537300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.537427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.537452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.537626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.537651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.537764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.537792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.537918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.537944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.538058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.538083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 
00:33:01.055 [2024-07-15 10:06:17.538206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.538231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.538370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.538395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.538566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.538590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.538704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.538728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.538899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.538926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.539071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.539095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.539215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.539240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.539417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.539442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.539619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.539643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.539795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.539820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 
00:33:01.055 [2024-07-15 10:06:17.539974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.539999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.540142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.540167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.540341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.540366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.540509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.540534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.540714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.540739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.540889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.540915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.541064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.541089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.541311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.541336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.541450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.541475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.541653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.541678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 
00:33:01.055 [2024-07-15 10:06:17.541825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.541852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.541983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.542009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.055 [2024-07-15 10:06:17.542158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.055 [2024-07-15 10:06:17.542183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.055 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.542336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.542361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.542509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.542534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.542686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.542711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.542869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.542906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.543034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.543059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.543180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.543205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.543345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.543370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 
00:33:01.056 [2024-07-15 10:06:17.543520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.543545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.543666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.543691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.543809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.543834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.544016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.544043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.544221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.544245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.544393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.544418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.544566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.544593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.544765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.544791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.544916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.544943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.545057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.545083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 
00:33:01.056 [2024-07-15 10:06:17.545198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.545224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.545373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.545399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.545541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.545566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.545718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.545743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.545859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.545890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.546060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.546086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.546268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.546293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.546440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.546465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.546611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.546636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.546753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.546779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 
00:33:01.056 [2024-07-15 10:06:17.546902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.546928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.547049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.547074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.547208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.547233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.547377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.547402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.547555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.547580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.547732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.547757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.547905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.547931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.548055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.548080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.548223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.548249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.548376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.548401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 
00:33:01.056 [2024-07-15 10:06:17.548527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.548552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.548728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.548753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.548881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.548907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.549034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.056 [2024-07-15 10:06:17.549059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.056 qpair failed and we were unable to recover it. 00:33:01.056 [2024-07-15 10:06:17.549181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.549206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.549354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.549383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.549505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.549531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.549650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.549675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.549815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.549840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.550006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.550032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 
00:33:01.057 [2024-07-15 10:06:17.550177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.550203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.550377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.550402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.550552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.550577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.550751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.550777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.550932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.550958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.551102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.551127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.551245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.551270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.551386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.551411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.551528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.551554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.551685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.551710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 
00:33:01.057 [2024-07-15 10:06:17.551871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.551915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.552065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.552091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.552235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.552260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.552394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.552419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.552591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.552616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.552761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.552787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.552956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.552982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.553130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.553156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.553315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.553341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.553520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.553545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 
00:33:01.057 [2024-07-15 10:06:17.553692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.553717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.553868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.553899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.554069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.554095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.554249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.554276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.554452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.554478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.554620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.554645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.554789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.554815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.554991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.555017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.555194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.555219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 00:33:01.057 [2024-07-15 10:06:17.555393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.057 [2024-07-15 10:06:17.555418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.057 qpair failed and we were unable to recover it. 
00:33:01.057 [2024-07-15 10:06:17.555572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.057 [2024-07-15 10:06:17.555597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:01.057 qpair failed and we were unable to recover it.
[... the same three messages repeat for every connection retry between the first and last attempts shown; only the timestamps differ ...]
00:33:01.063 [2024-07-15 10:06:17.591784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.063 [2024-07-15 10:06:17.591810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:01.063 qpair failed and we were unable to recover it.
00:33:01.063 [2024-07-15 10:06:17.591964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.591994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.592168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.592194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.592338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.592363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.592508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.592533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.592659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.592684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.592801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.592826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.592948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.592974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.593085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.593110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.593261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.593286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.593396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.593421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 
00:33:01.063 [2024-07-15 10:06:17.593572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.593598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.593752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.593777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.593892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.593918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.594063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.594088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.594236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.594261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.594430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.594456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.594626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.594651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.594770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.594795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.594936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.594962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.595130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.595155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 
00:33:01.063 [2024-07-15 10:06:17.595303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.595328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.595450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.595475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.595594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.063 [2024-07-15 10:06:17.595621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.063 qpair failed and we were unable to recover it. 00:33:01.063 [2024-07-15 10:06:17.595793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.595818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.595941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.595967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.596079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.596104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.596227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.596254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.596408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.596437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.596585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.596609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.596720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.596745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 
00:33:01.064 [2024-07-15 10:06:17.596890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.596917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.597084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.597112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.597275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.597303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.597473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.597501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.597619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.597662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.597827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.597856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.598010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.598036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.598161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.598187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.598354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.598383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.598557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.598582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 
00:33:01.064 [2024-07-15 10:06:17.598734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.598760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.598914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.598941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.599055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.599081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.599208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.599249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.599384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.599412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.599547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.599573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.599690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.599716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.599902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.599933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.600107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.600134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.600278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.600323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 
00:33:01.064 [2024-07-15 10:06:17.600466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.600495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.600670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.600696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.600868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.600913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.601078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.601107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.601251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.601278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.601403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.601429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.601574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.601600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.601744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.601773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.601924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.064 [2024-07-15 10:06:17.601952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.064 qpair failed and we were unable to recover it. 00:33:01.064 [2024-07-15 10:06:17.602079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.602106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 
00:33:01.065 [2024-07-15 10:06:17.602251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.602277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.602385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.602411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.602559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.602586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.602714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.602739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.602864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.602896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.603022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.603048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.603191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.603217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.603356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.603382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.603525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.603554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.603707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.603733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 
00:33:01.065 [2024-07-15 10:06:17.603919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.603946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.604064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.604090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.604213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.604239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.604388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.604414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.604559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.604585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.604700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.604726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.604904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.604930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.605077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.605103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.605252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.605278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.605439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.605466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 
00:33:01.065 [2024-07-15 10:06:17.605624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.605650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.605789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.605815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.605961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.605988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.606205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.606234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.606381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.606407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.606585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.606611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.606729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.606755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.606886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.606913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.607028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.607055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.607181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.607207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 
00:33:01.065 [2024-07-15 10:06:17.607349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.607375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.607499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.607525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.607648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.607674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.607822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.607848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.607979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.608007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.608121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.608147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.608269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.608295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.608413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.608439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.608609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.608635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 00:33:01.065 [2024-07-15 10:06:17.608762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.065 [2024-07-15 10:06:17.608788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.065 qpair failed and we were unable to recover it. 
00:33:01.065 [2024-07-15 10:06:17.608934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.608961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.609085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.609111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.609235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.609261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.609425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.609451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.609564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.609590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.609732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.609758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.609938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.609965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.610126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.610153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.610301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.610326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.610481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.610508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 
00:33:01.066 [2024-07-15 10:06:17.610655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.610682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.610808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.610834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.611003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.611029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.611150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.611176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.611319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.611345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.611494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.611520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.611648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.611674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.611793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.611819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.611942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.611969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.612086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.612113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 
00:33:01.066 [2024-07-15 10:06:17.612266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.612292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.612440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.612466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.612606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.612632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.612781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.612807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.612980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.613006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.613126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.613152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.613294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.613320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.613491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.613517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.613662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.613688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 00:33:01.066 [2024-07-15 10:06:17.613818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.066 [2024-07-15 10:06:17.613843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.066 qpair failed and we were unable to recover it. 
00:33:01.066 [2024-07-15 10:06:17.613977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.066 [2024-07-15 10:06:17.614004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:01.066 qpair failed and we were unable to recover it.
00:33:01.066 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats, with only the timestamps advancing, roughly 200 more times between 10:06:17.614 and 10:06:17.653 ...]
00:33:01.072 [2024-07-15 10:06:17.653073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.653103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.653255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.653282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.653404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.653430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.653574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.653618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.653806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.653835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.654028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.654055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.654179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.654205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.654350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.654376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.654493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.654519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.654658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.654701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 
00:33:01.072 [2024-07-15 10:06:17.654891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.654920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.655122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.655148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.655348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.655377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.655531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.655559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.655728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.655754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.655955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.655985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.656153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.656182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.656351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.656376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.656567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.656595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.656757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.656786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 
00:33:01.072 [2024-07-15 10:06:17.656922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.656950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.657101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.657146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.657305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.657334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.657526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.657552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.072 qpair failed and we were unable to recover it. 00:33:01.072 [2024-07-15 10:06:17.657722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.072 [2024-07-15 10:06:17.657750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.657906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.657936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.658136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.658162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.658326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.658355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.658491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.658520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.658712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.658738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 
00:33:01.073 [2024-07-15 10:06:17.658909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.658939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.659082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.659111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.659287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.659313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.659440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.659466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.659583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.659609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.659752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.659778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.659939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.659969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.660167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.660196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.660383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.660409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.660524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.660550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 
00:33:01.073 [2024-07-15 10:06:17.660759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.660788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.660954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.660984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.661180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.661209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.661392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.661421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.661557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.661583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.661774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.661803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.661971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.661999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.662174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.662200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.662369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.662398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.662557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.662586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 
00:33:01.073 [2024-07-15 10:06:17.662775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.662801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.662941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.662971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.663125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.663154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.663350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.663376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.663563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.663592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.663717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.663746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.663942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.663968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.664160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.664189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.664362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.664397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.664537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.664563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 
00:33:01.073 [2024-07-15 10:06:17.664761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.664790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.664942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.664972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.665116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.665142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.665287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.665329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.665518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.073 [2024-07-15 10:06:17.665547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.073 qpair failed and we were unable to recover it. 00:33:01.073 [2024-07-15 10:06:17.665689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.665714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.665865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.665897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.666050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.666077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.666251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.666277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.666449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.666477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 
00:33:01.074 [2024-07-15 10:06:17.666635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.666664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.666858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.666891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.667063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.667092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.667256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.667286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.667477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.667504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.667620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.667663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.667817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.667845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.668008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.668034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.668207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.668248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.668413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.668442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 
00:33:01.074 [2024-07-15 10:06:17.668589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.668615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.668756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.668799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.668957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.668987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.669165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.669192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.669343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.669369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.669536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.669564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.669737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.669763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.669913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.669939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.670115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.670141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.670294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.670320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 
00:33:01.074 [2024-07-15 10:06:17.670442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.670467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.670640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.670669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.670827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.670853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.670989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.671015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.671159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.671200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.671344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.671370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.671528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.671572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.671737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.671766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.671954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.671981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.672147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.672177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 
00:33:01.074 [2024-07-15 10:06:17.672363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.672392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.672526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.672552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.672699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.672742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.672890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.672920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.673061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.673087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.673255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.673298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.074 qpair failed and we were unable to recover it. 00:33:01.074 [2024-07-15 10:06:17.673456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.074 [2024-07-15 10:06:17.673484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.673654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.673680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.673825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.673851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.674032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.674066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 
00:33:01.075 [2024-07-15 10:06:17.674207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.674233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.674422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.674451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.674628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.674655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.674834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.674860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.675047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.675076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.675203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.675231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.675396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.675422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.675589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.675617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.675773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.675802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.675944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.675971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 
00:33:01.075 [2024-07-15 10:06:17.676082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.676108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.676315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.676341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.676460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.676486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.676646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.676672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.676815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.676841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.677001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.677027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.677220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.677249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.677370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.677399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.677543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.677569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 00:33:01.075 [2024-07-15 10:06:17.677715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.075 [2024-07-15 10:06:17.677740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.075 qpair failed and we were unable to recover it. 
00:33:01.075 [2024-07-15 10:06:17.677886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.075 [2024-07-15 10:06:17.677913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:01.075 qpair failed and we were unable to recover it.
00:33:01.075 [... the same three-line error sequence repeats verbatim, only the microsecond timestamps advancing, through [2024-07-15 10:06:17.717367] (log time 00:33:01.080): every connect() attempt to 10.0.0.2:4420 for tqpair=0xb2c450 is refused with errno 111, and each qpair fails without recovery ...]
00:33:01.080 [2024-07-15 10:06:17.717561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.080 [2024-07-15 10:06:17.717590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.080 qpair failed and we were unable to recover it. 00:33:01.080 [2024-07-15 10:06:17.717761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.080 [2024-07-15 10:06:17.717787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.080 qpair failed and we were unable to recover it. 00:33:01.080 [2024-07-15 10:06:17.717942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.080 [2024-07-15 10:06:17.717972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.080 qpair failed and we were unable to recover it. 00:33:01.080 [2024-07-15 10:06:17.718171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.080 [2024-07-15 10:06:17.718200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.080 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.718373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.718400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.718568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.718594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.718760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.718789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.718964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.718991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.719133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.719176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.719344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.719373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 
00:33:01.081 [2024-07-15 10:06:17.719569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.719595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.719725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.719753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.719946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.719975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.720147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.720173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.720367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.720400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.720535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.720564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.720755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.720781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.720937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.720966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.721102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.721131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.721262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.721291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 
00:33:01.081 [2024-07-15 10:06:17.721415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.721441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.721574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.721603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.721791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.721820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.721960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.721987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.722137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.722163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.722334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.722360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.722525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.722554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.722715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.722744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.722894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.722921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.723043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.723069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 
00:33:01.081 [2024-07-15 10:06:17.723207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.723236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.723404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.723430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.723559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.723600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.723759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.723788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.723978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.724005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.724173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.724203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.724389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.724418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.724609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.724635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.724795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.724821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.725011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.725041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 
00:33:01.081 [2024-07-15 10:06:17.725180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.725206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.725393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.725422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.725590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.725619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.725752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.725778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.081 [2024-07-15 10:06:17.725920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.081 [2024-07-15 10:06:17.725947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.081 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.726122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.726152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.726325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.726351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.726522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.726548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.726695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.726739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.726901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.726928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 
00:33:01.082 [2024-07-15 10:06:17.727083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.727109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.727288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.727314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.727495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.727522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.727674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.727703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.727860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.727898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.728064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.728090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.728199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.728240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.728435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.728464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.728652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.728677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.728839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.728868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 
00:33:01.082 [2024-07-15 10:06:17.729047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.729077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.729221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.729246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.729361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.729386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.729557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.729586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.729754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.729780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.729907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.729952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.730117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.730146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.730284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.730310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.730426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.730452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.730634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.730662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 
00:33:01.082 [2024-07-15 10:06:17.730837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.730863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.731065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.731094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.731257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.731286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.731457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.731483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.731632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.731675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.731837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.731866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.732040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.732067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.732217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.732259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.732419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.732448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.732619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.732645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 
00:33:01.082 [2024-07-15 10:06:17.732819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.732847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.733019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.733049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.733199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.733232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.733402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.733432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.733623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.733652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.733814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.082 [2024-07-15 10:06:17.733843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.082 qpair failed and we were unable to recover it. 00:33:01.082 [2024-07-15 10:06:17.734021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.734048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.734214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.734242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.734436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.734462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.734637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.734666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 
00:33:01.083 [2024-07-15 10:06:17.734805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.734835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.734992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.735019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.735168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.735195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.735349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.735378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.735526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.735552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.735703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.735729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.735934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.735961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.736093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.736119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.736284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.736313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.736507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.736533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 
00:33:01.083 [2024-07-15 10:06:17.736707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.736733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.736901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.736932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.737122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.737151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.737323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.737349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.737481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.737509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.737670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.737699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.737866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.737898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.738066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.738095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.738261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.738290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.738459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.738485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 
00:33:01.083 [2024-07-15 10:06:17.738683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.738712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.738884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.738913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.739083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.739109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.739236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.739262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.739379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.739406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.739555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.739581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.739753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.739779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.739959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.739989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.740161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.740188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 00:33:01.083 [2024-07-15 10:06:17.740386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.740415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it. 
[... three final repetitions for tqpair=0xb2c450 (10:06:17.740551 through 10:06:17.740951), after which the failing qpair address changes ...]
00:33:01.083 [2024-07-15 10:06:17.741146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.083 [2024-07-15 10:06:17.741191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.083 qpair failed and we were unable to recover it.
[... this pair repeats ~45 more times with advancing timestamps (through 10:06:17.750251), all for tqpair=0x7f0258000b90, addr=10.0.0.2, port=4420 ...]
00:33:01.085 [2024-07-15 10:06:17.750393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.750436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.750604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.750635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.750799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.750826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.750995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.751025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.751217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.751247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.751412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.751440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.751601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.751631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.751835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.751864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.752071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.752098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.752225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.752252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 
00:33:01.085 [2024-07-15 10:06:17.752381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.752412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.752596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.752623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.752794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.752824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.753017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.753047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.753219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.753246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.753440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.753470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.753670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.753697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.753813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.753840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.753967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.753995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.754209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.754239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 
00:33:01.085 [2024-07-15 10:06:17.754437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.754464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.754636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.754665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.754800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.754830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.755006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.755034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.755229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.755259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.755420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.755450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.755602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.755630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.755804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.755847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.756031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.756058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.756209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.756236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 
00:33:01.085 [2024-07-15 10:06:17.756383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.756427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.756559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.756589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.085 [2024-07-15 10:06:17.756733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.085 [2024-07-15 10:06:17.756761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.085 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.756911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.756960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.757123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.757152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.757323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.757349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.757519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.757549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.757685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.757714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.757914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.757941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.758110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.758139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 
00:33:01.086 [2024-07-15 10:06:17.758313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.758342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.758537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.758564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.758758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.758787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.758976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.759006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.759180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.759208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.759361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.759388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.759549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.759576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.759724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.759751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.759908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.759939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.760089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.760119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 
00:33:01.086 [2024-07-15 10:06:17.760317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.760344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.760541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.760571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.760708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.760739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.760911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.760938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.765068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.765113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.765317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.765347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.765527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.765555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.765726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.765755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.765947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.765978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.766144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.766172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 
00:33:01.086 [2024-07-15 10:06:17.766344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.766374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.766536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.766565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.766734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.766761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.766926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.766957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.767124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.767154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.767350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.767377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.767569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.767599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.767797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.767824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.768007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.768034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.768170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.768200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 
00:33:01.086 [2024-07-15 10:06:17.768358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.768388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.768531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.768558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.768705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.768749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.086 qpair failed and we were unable to recover it. 00:33:01.086 [2024-07-15 10:06:17.768921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.086 [2024-07-15 10:06:17.768957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.769135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.769162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.769321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.769350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.769522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.769551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.769718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.769745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.770017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.770047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.770213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.770242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 
00:33:01.087 [2024-07-15 10:06:17.770434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.770461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.770630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.770661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.770848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.770885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.771025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.771052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.771216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.771256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.771457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.771487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.771625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.771652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.771809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.771853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.772056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.772085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.772253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.772280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 
00:33:01.087 [2024-07-15 10:06:17.772488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.772517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.772676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.772706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.772902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.772929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.773131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.773161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.773322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.773352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.773520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.773547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.773664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.773690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.773843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.773873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.774028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.774054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.774229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.774271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 
00:33:01.087 [2024-07-15 10:06:17.774465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.774495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.774692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.774718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.774890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.774920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.775105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.775134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.775315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.775342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.775516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.775545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.775707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.775737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.775911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.775938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.776086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.776113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.776326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.776356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 
00:33:01.087 [2024-07-15 10:06:17.776514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.776541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.776813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.776842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.777048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.777078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.777218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.087 [2024-07-15 10:06:17.777248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.087 qpair failed and we were unable to recover it. 00:33:01.087 [2024-07-15 10:06:17.777395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.088 [2024-07-15 10:06:17.777423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.088 qpair failed and we were unable to recover it. 00:33:01.088 [2024-07-15 10:06:17.777630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.088 [2024-07-15 10:06:17.777659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.088 qpair failed and we were unable to recover it. 00:33:01.088 [2024-07-15 10:06:17.777831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.088 [2024-07-15 10:06:17.777858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.088 qpair failed and we were unable to recover it. 00:33:01.088 [2024-07-15 10:06:17.778056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.088 [2024-07-15 10:06:17.778087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.088 qpair failed and we were unable to recover it. 00:33:01.088 [2024-07-15 10:06:17.778248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.088 [2024-07-15 10:06:17.778278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.088 qpair failed and we were unable to recover it. 00:33:01.088 [2024-07-15 10:06:17.778475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.088 [2024-07-15 10:06:17.778502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.088 qpair failed and we were unable to recover it. 
00:33:01.088 [2024-07-15 10:06:17.778671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.088 [2024-07-15 10:06:17.778701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.088 qpair failed and we were unable to recover it. 00:33:01.088 [2024-07-15 10:06:17.778864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.088 [2024-07-15 10:06:17.778900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.088 qpair failed and we were unable to recover it. 00:33:01.088 [2024-07-15 10:06:17.779066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.088 [2024-07-15 10:06:17.779092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.088 qpair failed and we were unable to recover it. 00:33:01.088 [2024-07-15 10:06:17.779238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.088 [2024-07-15 10:06:17.779281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.088 qpair failed and we were unable to recover it. 00:33:01.088 [2024-07-15 10:06:17.779427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.088 [2024-07-15 10:06:17.779456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.088 qpair failed and we were unable to recover it. 00:33:01.088 [2024-07-15 10:06:17.779597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.088 [2024-07-15 10:06:17.779624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.088 qpair failed and we were unable to recover it. 00:33:01.088 [2024-07-15 10:06:17.779739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.088 [2024-07-15 10:06:17.779766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.088 qpair failed and we were unable to recover it. 00:33:01.088 [2024-07-15 10:06:17.779976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.088 [2024-07-15 10:06:17.780004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.088 qpair failed and we were unable to recover it. 00:33:01.088 [2024-07-15 10:06:17.780177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.088 [2024-07-15 10:06:17.780204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.088 qpair failed and we were unable to recover it. 00:33:01.088 [2024-07-15 10:06:17.780398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.088 [2024-07-15 10:06:17.780427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.088 qpair failed and we were unable to recover it. 
00:33:01.088 [2024-07-15 10:06:17.780615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.088 [2024-07-15 10:06:17.780645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:01.088 qpair failed and we were unable to recover it.
00:33:01.088 [... the same three-line failure repeats for every reconnect attempt against tqpair=0x7f0258000b90, timestamps 10:06:17.780821 through 10:06:17.813960; only the timestamps change ...]
00:33:01.375 [2024-07-15 10:06:17.814147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.375 [2024-07-15 10:06:17.814191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.375 qpair failed and we were unable to recover it.
00:33:01.375 [... the identical failure then repeats against tqpair=0x7f0268000b90, timestamps 10:06:17.814368 through 10:06:17.821647 ...]
00:33:01.375 [2024-07-15 10:06:17.821809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.821839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.822037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.822064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.822193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.822222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.822386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.822415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.822584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.822610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.822757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.822801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.822960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.822991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.823180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.823206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.823328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.823372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.823531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.823560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 
00:33:01.375 [2024-07-15 10:06:17.823713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.823742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.823945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.823972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.824086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.824112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.824222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.824253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.824368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.824394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.824566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.824596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.824765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.824792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.824959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.824989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.825173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.825202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.825393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.825419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 
00:33:01.375 [2024-07-15 10:06:17.825584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.825612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.825738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.825767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.825938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.825965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.826136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.826165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.375 qpair failed and we were unable to recover it. 00:33:01.375 [2024-07-15 10:06:17.826327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.375 [2024-07-15 10:06:17.826356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.826501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.826527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.826683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.826709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.826852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.826889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.827063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.827089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.827258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.827287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 
00:33:01.376 [2024-07-15 10:06:17.827440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.827469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.827634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.827661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.827828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.827857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.828006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.828037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.828230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.828257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.828382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.828409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.828607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.828637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.828806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.828832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.828964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.828991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.829144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.829170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 
00:33:01.376 [2024-07-15 10:06:17.829325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.829352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.829490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.829519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.829683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.829712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.829881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.829908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.830026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.830069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.830227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.830256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.830421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.830447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.830617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.830681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.830837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.830866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.831059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.831085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 
00:33:01.376 [2024-07-15 10:06:17.831242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.831268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.831385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.831412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.831564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.831590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.831739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.831769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.831891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.831918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.832061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.832087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.832251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.832280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.832465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.832493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.832635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.832661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.832835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.832862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 
00:33:01.376 [2024-07-15 10:06:17.833032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.833062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.833221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.833247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.833426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.833455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.833612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.833641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.833783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.833809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.833933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.833959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.834139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.834168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.834316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.834342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.834490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.834517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.834710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.834739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 
00:33:01.376 [2024-07-15 10:06:17.834905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.834932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.835047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.835088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.835222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.835251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.835447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.835473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.835667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.835696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.835861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.835897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.836068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.836094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.836283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.836311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.836476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.836506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.836643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.836671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 
00:33:01.376 [2024-07-15 10:06:17.836864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.836901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.837087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.837113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.837264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.837290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.837438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.837464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.837608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.837635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.837782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.837810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.837979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.838009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.838167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.838196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.838389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.838415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.838573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.838602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 
00:33:01.376 [2024-07-15 10:06:17.838791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.838820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.838992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.839018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.839178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.376 [2024-07-15 10:06:17.839207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.376 qpair failed and we were unable to recover it. 00:33:01.376 [2024-07-15 10:06:17.839338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.839372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.839571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.839597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.839723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.839750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.839902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.839929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.840045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.840071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.840215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.840241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.840377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.840407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 
00:33:01.377 [2024-07-15 10:06:17.840559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.840585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.840731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.840775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.840914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.840943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.841112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.841138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.841309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.841354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.841481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.841510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.841657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.841683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.841888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.841918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.842078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.842107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.842264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.842290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 
00:33:01.377 [2024-07-15 10:06:17.842436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.842478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.842654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.842680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.842830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.842856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.843023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.843050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.843181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.843211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.843400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.843426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.843594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.843623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.843784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.843813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.843976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.844003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.844149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.844191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 
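The errno = 111 in every one of these entries is ECONNREFUSED: at this point in the test nothing on 10.0.0.2 is accepting TCP connections on the NVMe/TCP port 4420, so each connect(2) issued by SPDK's posix socket layer is rejected and the qpair cannot be established. A minimal standalone sketch of the same failure mode follows (plain POSIX sockets, not SPDK code; the address and port simply mirror the log):

```c
/* Minimal sketch: a plain connect(2) against an address/port with no
 * listener fails with errno 111 (ECONNREFUSED) on Linux, the same error
 * the SPDK posix sock layer reports above. Illustrative only. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener bound to 10.0.0.2:4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```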
00:33:01.377 [2024-07-15 10:06:17.844400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.844444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.844649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.844677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.844898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.844925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.845074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.845100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.845225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.845251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.845440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.845469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.845666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.845713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.845913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.845940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.846084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.846113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 00:33:01.377 [2024-07-15 10:06:17.846426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.377 [2024-07-15 10:06:17.846484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:01.377 qpair failed and we were unable to recover it. 
00:33:01.377 [2024-07-15 10:06:17.846662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.377 [2024-07-15 10:06:17.846687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:01.377 qpair failed and we were unable to recover it.
[... the connect()-failed / sock-connection-error / qpair-failed triplet above repeats for tqpair=0xb2c450, always against addr=10.0.0.2, port=4420, from 10:06:17.846854 through 10:06:17.853709 ...]
00:33:01.377 [2024-07-15 10:06:17.853748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3a480 (9): Bad file descriptor
[... the triplet then repeats for tqpair=0x7f0258000b90 from 10:06:17.853991 through 10:06:17.854452 ...]
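On Linux, errno = 111 is ECONNREFUSED: the TCP SYN to 10.0.0.2, port=4420 is answered with a RST because nothing is accepting connections there (the NVMe/TCP target at that address is down or not yet listening). The "(9): Bad file descriptor" in the flush error is errno 9, EBADF, which appears when I/O is attempted on a socket descriptor that has already been torn down. A minimal sketch reproduces both values with plain POSIX calls -- illustrative only, not SPDK code, and it assumes no listener on 127.0.0.1:4420:

    /* repro_errnos.c -- reproduce errno 111 and errno 9 from the log above */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);   /* assumed: no listener here */

        /* errno 111: connection refused by the peer (RST to our SYN) */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect: errno = %d (%s)\n", errno, strerror(errno));

        /* errno 9: descriptor already closed when the I/O is issued */
        close(fd);
        if (write(fd, "x", 1) < 0)
            printf("write:   errno = %d (%s)\n", errno, strerror(errno));
        return 0;
    }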
[... triplet repeats for tqpair=0x7f0258000b90 from 10:06:17.854645 through 10:06:17.856640; once for tqpair=0x7f0268000b90 at 10:06:17.856775; then for tqpair=0xb2c450 from 10:06:17.856991 through 10:06:17.858769 ...]
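The refusal is reported out of posix_sock_create during qpair setup. With a non-blocking socket (the usual arrangement in poll-driven I/O of this kind), connect() returns EINPROGRESS immediately and the ECONNREFUSED only surfaces once the socket becomes writable, via getsockopt(SO_ERROR). A sketch of that pattern -- again plain POSIX, not SPDK's posix.c:

    /* nb_connect.c -- observing a deferred ECONNREFUSED on a non-blocking
     * socket (illustrative; port 4420 on loopback assumed closed) */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <poll.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        fcntl(fd, F_SETFL, O_NONBLOCK);

        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("connected immediately\n");
        } else if (errno == EINPROGRESS) {
            struct pollfd pfd = { .fd = fd, .events = POLLOUT };
            poll(&pfd, 1, 1000);                 /* wait for the handshake to settle */
            int err = 0;
            socklen_t len = sizeof(err);
            getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
            printf("deferred result: errno = %d (%s)\n", err, strerror(err));
        } else {
            printf("immediate failure: errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }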
[... triplet continues to cycle across qpairs: tqpair=0x7f0268000b90 from 10:06:17.858988 through 10:06:17.861827; tqpair=0x7f0258000b90 through 10:06:17.864153; tqpair=0x7f0260000b90 through 10:06:17.865285; tqpair=0x7f0258000b90 through 10:06:17.866357; tqpair=0x7f0260000b90 at 10:06:17.866534; tqpair=0x7f0268000b90 through 10:06:17.866992 ...]
00:33:01.378 [2024-07-15 10:06:17.867139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.378 [2024-07-15 10:06:17.867182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.378 qpair failed and we were unable to recover it. 00:33:01.378 [2024-07-15 10:06:17.867506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.378 [2024-07-15 10:06:17.867560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.378 qpair failed and we were unable to recover it. 00:33:01.378 [2024-07-15 10:06:17.867723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.378 [2024-07-15 10:06:17.867752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.378 qpair failed and we were unable to recover it. 00:33:01.378 [2024-07-15 10:06:17.867911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.378 [2024-07-15 10:06:17.867957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.378 qpair failed and we were unable to recover it. 00:33:01.378 [2024-07-15 10:06:17.868099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.378 [2024-07-15 10:06:17.868126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.378 qpair failed and we were unable to recover it. 00:33:01.378 [2024-07-15 10:06:17.868318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.378 [2024-07-15 10:06:17.868347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.378 qpair failed and we were unable to recover it. 00:33:01.378 [2024-07-15 10:06:17.868565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.378 [2024-07-15 10:06:17.868596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.378 qpair failed and we were unable to recover it. 00:33:01.378 [2024-07-15 10:06:17.868751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.378 [2024-07-15 10:06:17.868780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.378 qpair failed and we were unable to recover it. 00:33:01.378 [2024-07-15 10:06:17.868906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.378 [2024-07-15 10:06:17.868948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.378 qpair failed and we were unable to recover it. 00:33:01.378 [2024-07-15 10:06:17.869070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.378 [2024-07-15 10:06:17.869096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.378 qpair failed and we were unable to recover it. 
00:33:01.378 [2024-07-15 10:06:17.869256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.378 [2024-07-15 10:06:17.869298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.869568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.869620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.869800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.869829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.869983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.870010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.870158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.870206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.870454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.870506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.870685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.870727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.870921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.870948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.871099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.871125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.871344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.871370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 
00:33:01.379 [2024-07-15 10:06:17.871576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.871602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.871777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.871806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.871965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.871992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.872139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.872183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.872373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.872402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.872608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.872638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.872826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.872854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.873051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.873077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.873228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.873254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.873448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.873477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 
00:33:01.379 [2024-07-15 10:06:17.873636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.873665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.873814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.873841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.873991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.874018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.874209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.874238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.874475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.874507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.874694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.874723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.874956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.874983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.875096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.875122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.875246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.875288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.875482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.875533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 
00:33:01.379 [2024-07-15 10:06:17.875663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.875693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.875885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.875930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.876056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.876082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.876225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.876252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.876368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.876411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.876596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.876625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.876785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.876813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.876981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.877008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.877170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.877199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.877376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.877405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 
00:33:01.379 [2024-07-15 10:06:17.877575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.877605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.877776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.877802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.877983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.878009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.878173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.878202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.878327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.878357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.878627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.878682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.878831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.878860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.879059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.879086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.879228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.879254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.879415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.879444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 
00:33:01.379 [2024-07-15 10:06:17.879579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.879607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.879795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.879824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.880001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.880027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.880192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.880221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.880419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.880445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.880633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.880666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.880863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.880899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.881059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.881085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.881218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.881244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.881370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.881395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 
00:33:01.379 [2024-07-15 10:06:17.881646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.881695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.881859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.881896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.882057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.882083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.882233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.882259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.882424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.882452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.882730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.882781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.882980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.883007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.883175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.883204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.883367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.883395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.883591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.883617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 
00:33:01.379 [2024-07-15 10:06:17.883783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.883813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.883944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.883974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.884143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.884169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.884312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.884338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.884491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.884522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.884669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.884695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.884868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.884899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.885077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.379 [2024-07-15 10:06:17.885106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.379 qpair failed and we were unable to recover it. 00:33:01.379 [2024-07-15 10:06:17.885276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.380 [2024-07-15 10:06:17.885304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.380 qpair failed and we were unable to recover it. 00:33:01.380 [2024-07-15 10:06:17.885457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.380 [2024-07-15 10:06:17.885484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.380 qpair failed and we were unable to recover it. 
00:33:01.380 [2024-07-15 10:06:17.885625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.885667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.885803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.885829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.885965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.885992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.886120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.886146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.886287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.886313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.886488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.886517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.886676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.886705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.886882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.886909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.887107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.887136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.887323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.887352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.887516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.887542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.887706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.887735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.887891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.887921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.888120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.888146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.888276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.888302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.888448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.888478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.888625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.888652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.888816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.888845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.889010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.889040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.889233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.889260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.889459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.889489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.889675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.889704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.889845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.889871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.889996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.890022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.890222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.890251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.890388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.890414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.890599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.890627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.890812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.890841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.891020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.891048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.891201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.891227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.891380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.891422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.891561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.891587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.891740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.891782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.891954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.891984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.892164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.892190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.892382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.892411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.892543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.892572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.892738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.892764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.892923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.892952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.893115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.893145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.893316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.893342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.893493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.893538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.893667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.893696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.893860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.893890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.894060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.894089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.894254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.894282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.894477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.894503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.894634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.894665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.894837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.894863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.895018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.895044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.895205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.895234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.895431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.895459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.895630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.895655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.895814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.895843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.896057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.896087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.896258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.896289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.896484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.896513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.896678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.896707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.896900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.896927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.897098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.897127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.897289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.897319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.897514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.897540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.897678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.897707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.897859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.897893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.898038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.898064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.898254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.898282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.898418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.898447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.898587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.898613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.898756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.898782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.380 [2024-07-15 10:06:17.898965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.380 [2024-07-15 10:06:17.898996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.380 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.899165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.899192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.899358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.899387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.899508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.899537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.899726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.899755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.899926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.899953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.900077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.900103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.900251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.900276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.900421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.900447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.900636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.900665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.900836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.900862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.901019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.901045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.901221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.901250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.901447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.901473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.901631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.901660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.901816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.901845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.902041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.902068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.902263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.902292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.902450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.902479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.902672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.902698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.902863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.902899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.903057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.903086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.903238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.903264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.903441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.903467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.903610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.903639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.903772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.903798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.903906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.903937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.904109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.904139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.904309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.904336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.904525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.904553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.904685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.904714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.904911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.904938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.905069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.905098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.905287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.905315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.905478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.905504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.905670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.905699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.905879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.905923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.906071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.906097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.906222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.906266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.906452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.906481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.906654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.906680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.906857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.906898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.907060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.907089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.907238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.907264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.907415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.907441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.907610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.907636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.907782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.907808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.907978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.908008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.908172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.908203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.908403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.908429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.908591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.908620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.908782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.908811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.908946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.908973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.909124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.909150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.909306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.909334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.909496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.909522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.909642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.909668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.909874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.909907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.910067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.910093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.910253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.910282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.910475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.910504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.910692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.910718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.910906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.910935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.911136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.911162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.911280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.911306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.911430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.911455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.911611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.911644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.911805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.911831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.911988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.912015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.912164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.912191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.912407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.912432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.912607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.912636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.912803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.912832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.913019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.913047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.913210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.913239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.913393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.381 [2024-07-15 10:06:17.913422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.381 qpair failed and we were unable to recover it.
00:33:01.381 [2024-07-15 10:06:17.913592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.913618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.913738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.913765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.913968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.913998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.914148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.914174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.914380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.914409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.914601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.914630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.914825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.914852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.915000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.915040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.915245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.915275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.915467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.915494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.915647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.915692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.915889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.915934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.916066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.916092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.916269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.916295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.916436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.916465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.916638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.916664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.916828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.916873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.917082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.917109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.917261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.917287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.917439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.917467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.917670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.917698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.917891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.917920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.918076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.918104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.918270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.918299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.918490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.918517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.918685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.918715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.918853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.918900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.919050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.919077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.919225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.919252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.919404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.919434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.919604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.919630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.919775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.919820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.919958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.919986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.920134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.920161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.920333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.920364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.920526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.920557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.920699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.920727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.920923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.920952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.921082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.921111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.921282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.921309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.921476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.921503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.921665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.921695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.921866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.921902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.922025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.922051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.922209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.922236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.922386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.922413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.922564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.922591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.922763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.922793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.922955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.922982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.923148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.923178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.923325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.923356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.923527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.923554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.923677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.923722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.923865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.923902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.924092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.924119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.924313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.924342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.924475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.924505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.924677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.924708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.924887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.924914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.925034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.925061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.925232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.925259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.925471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.925501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.925663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.925693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.925850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.925884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.926007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.926033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.926148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.926175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.926294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.926321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.926462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.926490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.926632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.926676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.926849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.926889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.927063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.927089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.927262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.927293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.927458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.927486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.927608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.927651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.927814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.927845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.928023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.928051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.928219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.928250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.382 [2024-07-15 10:06:17.928439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.382 [2024-07-15 10:06:17.928469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.382 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.928633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.928661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.928861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.928901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.929068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.929094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.929244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.929271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.929423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.929450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.929622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.929649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.929811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.929838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.929995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.930022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.930146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.930200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.930370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.930397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.930558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.930588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.930748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.930779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.930947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.930974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.931117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.931143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.931341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.931371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.931519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.931547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.931744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.931774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.931965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.931995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.932167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.932194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.932309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.932357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.932543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.932573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.932744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.932771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.932943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.932973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.933131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.933161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.933372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.933399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.933660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.933711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.933900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.933931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.934084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.934111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.934274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.934303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.934490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.934519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.934684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.934711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.934908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.934938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.935107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.935136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.935336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.935363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.935557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.935586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.935746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.935775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.935979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.936006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.936177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.936206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.936339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.936368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.936533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.936560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.936724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.936754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.936887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.936927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.937110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.937137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.937304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.937334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.937528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.937558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.937754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.937781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.937952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.937981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.938143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.938172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.938329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.938356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.938529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.938572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.938732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.938761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.938956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.938982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.939109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.939136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.939310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.939338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.939544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.939571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.939735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.939765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.939945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.939973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.940121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.940148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.940322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.940352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.940491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.940524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.940722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.940749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.940911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.940942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.941130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.941157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.941310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.941337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.941463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.941507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.941696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.941726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.941893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.941929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.942058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.942101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.942265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.942295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.942474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.383 [2024-07-15 10:06:17.942501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.383 qpair failed and we were unable to recover it.
00:33:01.383 [2024-07-15 10:06:17.942619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.942665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.942820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.942849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.943050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.943078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.943281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.943310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.943499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.943529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.943727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.943754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.943944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.943975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.944139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.944168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.944335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.944361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.944474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.944517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.944705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.944735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.944936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.944963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.945134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.945164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.945298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.945328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.945488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.945515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.945665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.945692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.945847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.945874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.946001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.946029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.946180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.946223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.946386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.946416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.946587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.946613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.946729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.946755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.946904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.946935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.947106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.947134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.947254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.947282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.947460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.947490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.947621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.947648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.947800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.947843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.948042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.948072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.948245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.948277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.948451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.948479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.948670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.948700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.948844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.948871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.948998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.949025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.949206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.949237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.949401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.949428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.949624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.949654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.949836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.949865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.950043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.950070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.950236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.950268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.950427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.950457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.950627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.950655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.950829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.950858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.951075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.951102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.951253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.951280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.951450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.951478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.951678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.384 [2024-07-15 10:06:17.951708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.384 qpair failed and we were unable to recover it.
00:33:01.384 [2024-07-15 10:06:17.951859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.951894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.952045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.952071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.952247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.952276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.952469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.952496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.952623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.952650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.952800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.952827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.952990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.953018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.953212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.953242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.953404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.953433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.953601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.953629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 
00:33:01.384 [2024-07-15 10:06:17.953800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.953830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.954002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.954032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.954194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.954221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.954386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.954417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.954609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.954639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.954803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.954846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.955024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.955052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.955225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.955254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.955425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.955452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.955604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.955631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 
00:33:01.384 [2024-07-15 10:06:17.955797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.955827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.956028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.956055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.956223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.956256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.956448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.956478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.956643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.956670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.956848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.956899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.957056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.957085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.957255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.957281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.957470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.957499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 00:33:01.384 [2024-07-15 10:06:17.957667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.384 [2024-07-15 10:06:17.957694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.384 qpair failed and we were unable to recover it. 
00:33:01.384 [2024-07-15 10:06:17.957845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.957873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.958027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.958056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.958242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.958272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.958467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.958494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.958622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.958651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.958837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.958866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.959025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.959052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.959199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.959242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.959409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.959439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.959603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.959629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 
00:33:01.385 [2024-07-15 10:06:17.959751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.959778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.959927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.959955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.960079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.960105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.960275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.960302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.960462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.960492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.960657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.960683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.960872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.960908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.961080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.961110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.961312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.961339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.961515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.961542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 
00:33:01.385 [2024-07-15 10:06:17.961689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.961731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.961872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.961903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.962096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.962126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.962312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.962342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.962503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.962530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.962691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.962721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.962901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.962932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.963082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.963108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.963272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.963299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.963465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.963495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 
00:33:01.385 [2024-07-15 10:06:17.963642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.963668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.963843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.963870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.964055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.964090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.964236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.964264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.964463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.964492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.964680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.964709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.964884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.964912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.965077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.965106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.965283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.965310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.965456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.965483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 
00:33:01.385 [2024-07-15 10:06:17.965608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.965634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.965816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.965846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.966029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.966057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.966230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.966259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.966383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.966413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.966606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.966633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.966773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.966803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.966984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.967011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.967164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.967192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.967358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.967388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 
00:33:01.385 [2024-07-15 10:06:17.967552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.967582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.967753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.967780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.967894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.967940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.968132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.968161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.968310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.968337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.968461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.968488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.968653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.968684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.968852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.968884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.969046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.969075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.969229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.969259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 
00:33:01.385 [2024-07-15 10:06:17.969422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.969449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.969627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.969657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.969812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.969841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.970041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.970069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.970180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.970224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.970362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.970392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.970586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.970613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.970788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.970818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.970983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.971014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.971177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.971204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 
00:33:01.385 [2024-07-15 10:06:17.971319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.971361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.385 [2024-07-15 10:06:17.971523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.385 [2024-07-15 10:06:17.971553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.385 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.971742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.971776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.971942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.971969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.972079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.972106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.972280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.972307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.972493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.972523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.972688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.972716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.972855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.972889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.973018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.973063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 
00:33:01.386 [2024-07-15 10:06:17.973227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.973256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.973447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.973473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.973664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.973694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.973855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.973891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.974042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.974070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.974221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.974263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.974447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.974475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.974652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.974679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.974845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.974889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.975033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.975062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 
00:33:01.386 [2024-07-15 10:06:17.975225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.975251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.975402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.975430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.975573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.975601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.975725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.975752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.975920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.975952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.976087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.976117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.976310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.976337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.976501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.976531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.976684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.976713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.976885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.976913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 
00:33:01.386 [2024-07-15 10:06:17.977109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.977139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.977302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.977333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.977506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.977533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.977687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.977713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.977856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.977906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.978051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.978078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.978195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.978222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.978368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.978397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.978580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.978609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.978747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.978777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 
00:33:01.386 [2024-07-15 10:06:17.978953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.978981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.979127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.979153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.979317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.979351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.979481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.979511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.979698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.979724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.979891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.979921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.980083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.980112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.980245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.980272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.980424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.980450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 00:33:01.386 [2024-07-15 10:06:17.980639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.386 [2024-07-15 10:06:17.980668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.386 qpair failed and we were unable to recover it. 
00:33:01.386 [2024-07-15 10:06:17.980831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.386 [2024-07-15 10:06:17.980858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.386 qpair failed and we were unable to recover it.
00:33:01.386 [2024-07-15 10:06:17.980995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.386 [2024-07-15 10:06:17.981022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.386 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats for every retry timestamped between 2024-07-15 10:06:17.981172 and 10:06:18.020856 ...]
00:33:01.389 [2024-07-15 10:06:18.021016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.389 [2024-07-15 10:06:18.021047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.389 qpair failed and we were unable to recover it.
00:33:01.389 [2024-07-15 10:06:18.021217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.021244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.021409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.021440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.021600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.021630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.021768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.021794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.021986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.022016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.022180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.022210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.022377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.022405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.022519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.022545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.022723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.022753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.022936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.022968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 
00:33:01.389 [2024-07-15 10:06:18.023121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.023148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.023272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.023298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.023447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.023474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.023633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.023662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.023802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.023831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.024013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.024041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.024235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.024266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.024429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.024458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.024605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.024631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.024803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.024830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 
00:33:01.389 [2024-07-15 10:06:18.024977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.025007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.025184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.025213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.025361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.025387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.025570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.025600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.025774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.025801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.025976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.026003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.026142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.026171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.026341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.026368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.026564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.026594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.026752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.026782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 
00:33:01.389 [2024-07-15 10:06:18.026952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.026980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.027143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.027172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.027334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.027364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.027559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.027587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.027762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.027792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.027963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.027990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.028170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.028197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.028392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.028422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.028609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.028639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.028809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.028836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 
00:33:01.389 [2024-07-15 10:06:18.028977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.029005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.029171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.029200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.029346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.029376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.029523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.029571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.389 [2024-07-15 10:06:18.029733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.389 [2024-07-15 10:06:18.029763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.389 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.029961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.029990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.030157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.030187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.030359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.030387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.030537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.030563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.030684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.030732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 
00:33:01.390 [2024-07-15 10:06:18.030977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.031007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.031181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.031207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.031353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.031380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.031556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.031585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.031777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.031804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.031974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.032004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.032158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.032188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.032336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.032364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.032532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.032558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.032760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.032790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 
00:33:01.390 [2024-07-15 10:06:18.032959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.032987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.033153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.033182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.033354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.033383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.033537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.033564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.033711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.033737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.033891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.033921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.034085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.034112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.034280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.034310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.034469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.034498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.034670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.034697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 
00:33:01.390 [2024-07-15 10:06:18.034844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.034871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.035027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.035073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.035241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.035268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.035419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.035462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.035619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.035648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.035820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.035847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.036009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.036036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.036181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.036210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.036383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.036409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.036575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.036605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 
00:33:01.390 [2024-07-15 10:06:18.036792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.036821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.036994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.037021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.037167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.037194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.037368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.037394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.037567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.037593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.037713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.037739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.037915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.037960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.038104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.038131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.038281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.038307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.038453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.038484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 
00:33:01.390 [2024-07-15 10:06:18.038658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.038685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.038854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.038890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.039047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.039077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.039247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.039275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.039396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.039422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.039582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.039613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.039813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.039840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.039996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.040024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.040152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.040197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.040369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.040396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 
00:33:01.390 [2024-07-15 10:06:18.040547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.040574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.040686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.040713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.040833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.040859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.041038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.041068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.041230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.041261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.041433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.041460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.041621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.041652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.041846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.041885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.042061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.042087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.042201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.042243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 
00:33:01.390 [2024-07-15 10:06:18.042409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.042438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.042645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.042671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.042820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.042846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.043046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.043075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.043245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.390 [2024-07-15 10:06:18.043272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.390 qpair failed and we were unable to recover it. 00:33:01.390 [2024-07-15 10:06:18.043469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.043498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.043654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.043684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.043852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.043886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.044019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.044046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.044193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.044220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 
00:33:01.391 [2024-07-15 10:06:18.044348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.044375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.044544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.044574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.044761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.044791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.044990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.045018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.045186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.045215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.045384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.045411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.045580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.045607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.045806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.045836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.046005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.046035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.046202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.046233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 
00:33:01.391 [2024-07-15 10:06:18.046354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.046400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.046535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.046564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.046757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.046785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.046919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.046950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.047137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.047164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.047314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.047341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.047461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.047507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.047694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.047724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.047894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.047921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 00:33:01.391 [2024-07-15 10:06:18.048076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.391 [2024-07-15 10:06:18.048102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.391 qpair failed and we were unable to recover it. 
00:33:01.394 [2024-07-15 10:06:18.086760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.086789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.086924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.086955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.087095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.087123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.087323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.087354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.087492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.087521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.087707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.087733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.087874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.087914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.088075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.088105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.088279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.088305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.088454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.088482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 
00:33:01.394 [2024-07-15 10:06:18.088642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.088670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.088820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.088847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.089025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.089052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.089196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.089226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.089420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.089447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.089639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.089670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.089838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.089867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.090027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.090053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.090207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.090234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.090412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.090442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 
00:33:01.394 [2024-07-15 10:06:18.090589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.090616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.090811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.090841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.091051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.091078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.091253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.091280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.091422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.091452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.091646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.091675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.091867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.091902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.092074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.092104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.092290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.092320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.092513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.092544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 
00:33:01.394 [2024-07-15 10:06:18.092718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.092748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.092959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.092986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.093134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.093161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.093282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.093328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.093495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.093525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.093716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.093742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.093907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.093937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.094067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.094097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.094259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.094287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.094409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.094451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 
00:33:01.394 [2024-07-15 10:06:18.094612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.094642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.094814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.094841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.095032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.095060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.095230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.394 [2024-07-15 10:06:18.095259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.394 qpair failed and we were unable to recover it. 00:33:01.394 [2024-07-15 10:06:18.095435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.095463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.095593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.095620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.095748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.095775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.095935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.095963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.096097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.096126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.096285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.096314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 
00:33:01.395 [2024-07-15 10:06:18.096515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.096543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.096738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.096768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.096927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.096957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.097109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.097136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.097288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.097332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.097460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.097489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.097686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.097712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.097887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.097917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.098061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.098091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.098225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.098252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 
00:33:01.395 [2024-07-15 10:06:18.098399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.098425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.098569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.098599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.098761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.098787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.098949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.098979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.099145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.099175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.099346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.099373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.099524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.099550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.099723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.099753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.099902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.099930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.100083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.100115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 
00:33:01.395 [2024-07-15 10:06:18.100241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.100268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.100415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.100442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.100589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.100620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.100792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.100821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.100973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.101000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.101169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.101214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.101343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.101372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.101518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.101544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.101689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.101717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.101886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.101916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 
00:33:01.395 [2024-07-15 10:06:18.102087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.102114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.102269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.102296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.102470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.102498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.102695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.102722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.102837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.102898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.103060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.103089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.103255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.103281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.103451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.103477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.103622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.103665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.103806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.103833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 
00:33:01.395 [2024-07-15 10:06:18.103968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.103996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.104148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.104175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.104324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.104351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.104548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.104578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.104731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.104761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.104926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.104954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.105126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.105156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.105292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.105319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.105469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.105496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.105618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.105646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 
00:33:01.395 [2024-07-15 10:06:18.105825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.105856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.106033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.106060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.106263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.106293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.106488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.106517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.106712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.106739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.106908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.106938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.107103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.107134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.107327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.107354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.107512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.107541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.107704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.107739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 
00:33:01.395 [2024-07-15 10:06:18.107890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.107918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.108108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.108138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.108293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.108324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.108499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.108527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.108694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.108724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.108890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.108921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.109087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.109115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.395 [2024-07-15 10:06:18.109282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.395 [2024-07-15 10:06:18.109313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.395 qpair failed and we were unable to recover it. 00:33:01.396 [2024-07-15 10:06:18.109473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.396 [2024-07-15 10:06:18.109503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.396 qpair failed and we were unable to recover it. 00:33:01.396 [2024-07-15 10:06:18.109696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.396 [2024-07-15 10:06:18.109723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.396 qpair failed and we were unable to recover it. 
00:33:01.396 [2024-07-15 10:06:18.109858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.396 [2024-07-15 10:06:18.109896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.396 qpair failed and we were unable to recover it. 00:33:01.396 [2024-07-15 10:06:18.110059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.396 [2024-07-15 10:06:18.110089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.396 qpair failed and we were unable to recover it. 00:33:01.396 [2024-07-15 10:06:18.110281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.396 [2024-07-15 10:06:18.110308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.396 qpair failed and we were unable to recover it. 00:33:01.396 [2024-07-15 10:06:18.110475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.396 [2024-07-15 10:06:18.110505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.396 qpair failed and we were unable to recover it. 00:33:01.396 [2024-07-15 10:06:18.110664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.396 [2024-07-15 10:06:18.110694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.396 qpair failed and we were unable to recover it. 00:33:01.396 [2024-07-15 10:06:18.110856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.396 [2024-07-15 10:06:18.110903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.396 qpair failed and we were unable to recover it. 00:33:01.396 [2024-07-15 10:06:18.111047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.396 [2024-07-15 10:06:18.111074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.396 qpair failed and we were unable to recover it. 00:33:01.396 [2024-07-15 10:06:18.111245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.396 [2024-07-15 10:06:18.111275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.396 qpair failed and we were unable to recover it. 00:33:01.396 [2024-07-15 10:06:18.111443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.396 [2024-07-15 10:06:18.111470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.396 qpair failed and we were unable to recover it. 00:33:01.396 [2024-07-15 10:06:18.111636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.396 [2024-07-15 10:06:18.111667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.396 qpair failed and we were unable to recover it. 
00:33:01.396 [2024-07-15 10:06:18.111856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.396 [2024-07-15 10:06:18.111896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.396 qpair failed and we were unable to recover it.
[... the identical triplet of messages (posix_sock_create connect() failure with errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats continuously from 10:06:18.111856 through 10:06:18.152058; repeated occurrences collapsed ...]
00:33:01.693 [2024-07-15 10:06:18.152194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.152225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 00:33:01.693 [2024-07-15 10:06:18.152397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.152425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 00:33:01.693 [2024-07-15 10:06:18.152600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.152627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 00:33:01.693 [2024-07-15 10:06:18.152769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.152799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 00:33:01.693 [2024-07-15 10:06:18.153003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.153031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 00:33:01.693 [2024-07-15 10:06:18.153170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.153200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 00:33:01.693 [2024-07-15 10:06:18.153333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.153362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 00:33:01.693 [2024-07-15 10:06:18.153506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.153532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 00:33:01.693 [2024-07-15 10:06:18.153681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.153725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 00:33:01.693 [2024-07-15 10:06:18.153857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.153958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 
00:33:01.693 [2024-07-15 10:06:18.154116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.154143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 00:33:01.693 [2024-07-15 10:06:18.154332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.154362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 00:33:01.693 [2024-07-15 10:06:18.154517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.154547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 00:33:01.693 [2024-07-15 10:06:18.154720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.154746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 00:33:01.693 [2024-07-15 10:06:18.154938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.154968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 00:33:01.693 [2024-07-15 10:06:18.155157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.155186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 00:33:01.693 [2024-07-15 10:06:18.155338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.155365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 00:33:01.693 [2024-07-15 10:06:18.155522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.155564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 00:33:01.693 [2024-07-15 10:06:18.155695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.155724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 00:33:01.693 [2024-07-15 10:06:18.155895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.155922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 
00:33:01.693 [2024-07-15 10:06:18.156129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.156159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.693 qpair failed and we were unable to recover it. 00:33:01.693 [2024-07-15 10:06:18.156319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.693 [2024-07-15 10:06:18.156349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.156496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.156524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.156675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.156722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.156894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.156926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.157125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.157153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.157281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.157311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.157452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.157481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.157644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.157672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.157835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.157865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 
00:33:01.694 [2024-07-15 10:06:18.158041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.158068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.158216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.158242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.158434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.158464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.158660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.158690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.158863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.158912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.159106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.159137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.159299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.159333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.159507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.159534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.159685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.159729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.159898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.159927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 
00:33:01.694 [2024-07-15 10:06:18.160112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.160139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.160308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.160337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.160487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.160516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.160684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.160711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.160830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.160874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.161044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.161073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.161233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.161260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.161424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.161454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.161624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.161653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.161854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.161887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 
00:33:01.694 [2024-07-15 10:06:18.162078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.162108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.162246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.162276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.162438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.162465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.162657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.162687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.162846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.162897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.163070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.163097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.163286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.163315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.163453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.163484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.163652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.163680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.163807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.163835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 
00:33:01.694 [2024-07-15 10:06:18.163991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.164018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.164168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.694 [2024-07-15 10:06:18.164196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.694 qpair failed and we were unable to recover it. 00:33:01.694 [2024-07-15 10:06:18.164319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.164345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.164479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.164506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.164745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.164772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.164943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.164974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.165149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.165179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.165347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.165375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.165542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.165571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.165732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.165764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 
00:33:01.695 [2024-07-15 10:06:18.165942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.165971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.166132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.166160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.166322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.166352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.166503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.166529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.166676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.166704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.166875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.166911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.167105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.167136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.167269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.167300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.167466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.167496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.167664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.167690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 
00:33:01.695 [2024-07-15 10:06:18.167852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.167888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.168051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.168081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.168252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.168279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.168447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.168478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.168638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.168668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.168861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.168897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.169027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.169055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.169170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.169197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.169337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.169364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.169532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.169561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 
00:33:01.695 [2024-07-15 10:06:18.169751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.169781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.169927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.169955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.170111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.170138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.170287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.170313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.170460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.170487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.695 qpair failed and we were unable to recover it. 00:33:01.695 [2024-07-15 10:06:18.170628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.695 [2024-07-15 10:06:18.170657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.170791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.170819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.170998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.171026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.171147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.171192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.171363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.171392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 
00:33:01.696 [2024-07-15 10:06:18.171534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.171561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.171707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.171751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.171905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.171949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.172103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.172130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.172270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.172300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.172498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.172529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.172702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.172729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.172900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.172931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.173124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.173154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.173298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.173325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 
00:33:01.696 [2024-07-15 10:06:18.173482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.173512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.173669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.173699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.173867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.173901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.174107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.174137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.174297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.174327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.174472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.174499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.174639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.174685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.174851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.174894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.175063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.175090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.175200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.175227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 
00:33:01.696 [2024-07-15 10:06:18.175402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.175429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.175603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.175630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.175779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.175805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.175931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.175958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.176143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.176170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.176317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.176347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.176520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.176549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.176722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.176750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.176917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.176948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 00:33:01.696 [2024-07-15 10:06:18.177112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.177142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it. 
00:33:01.696 [2024-07-15 10:06:18.177284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.696 [2024-07-15 10:06:18.177310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.696 qpair failed and we were unable to recover it.
[... the same three-line error (posix.c:1038 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats roughly 200 more times between 10:06:18.177 and 10:06:18.217 with only the timestamps changing ...]
00:33:01.702 [2024-07-15 10:06:18.217789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.217819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it.
00:33:01.702 [2024-07-15 10:06:18.217981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.218009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.218153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.218184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.218367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.218396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.218544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.218577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.218724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.218751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.218933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.218965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.219115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.219141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.219316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.219343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.219549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.219579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.219743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.219774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 
00:33:01.702 [2024-07-15 10:06:18.219970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.219997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.220120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.220164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.220338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.220364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.220513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.220539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.220667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.220693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.220844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.220871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.221054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.221084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.221275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.221305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.221476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.221503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.221692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.221720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 
00:33:01.702 [2024-07-15 10:06:18.221926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.221954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.222072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.222099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.222282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.222310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.222487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.222517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.222664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.222691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.222861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.222895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.223070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.223101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.223243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.223271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.223421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.223464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.223630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.223660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 
00:33:01.702 [2024-07-15 10:06:18.223831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.223858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.223987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.224030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.224189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.224219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.702 [2024-07-15 10:06:18.224389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.702 [2024-07-15 10:06:18.224417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.702 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.224611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.224641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.224825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.224855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.225044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.225073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.225241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.225272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.225408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.225438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.225633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.225660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 
00:33:01.703 [2024-07-15 10:06:18.225795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.225825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.225992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.226023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.226190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.226218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.226387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.226418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.226565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.226609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.226779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.226807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.227002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.227033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.227166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.227196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.227363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.227390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.227584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.227614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 
00:33:01.703 [2024-07-15 10:06:18.227819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.227846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.227995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.228023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.228146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.228173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.228346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.228391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.228589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.228616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.228742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.228772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.228909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.228940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.229088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.229116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.229261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.229305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.229495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.229525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 
00:33:01.703 [2024-07-15 10:06:18.229700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.229727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.229847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.229906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.230042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.230072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.230236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.230263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.230456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.230487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.230681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.230708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.230823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.230851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.231041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.231069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.231240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.703 [2024-07-15 10:06:18.231270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.703 qpair failed and we were unable to recover it. 00:33:01.703 [2024-07-15 10:06:18.231430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.231458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 
00:33:01.704 [2024-07-15 10:06:18.231624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.231654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.231821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.231851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.232028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.232056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.232221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.232251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.232412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.232442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.232575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.232601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.232747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.232774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.232976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.233007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.233175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.233202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.233357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.233387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 
00:33:01.704 [2024-07-15 10:06:18.233518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.233548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.233746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.233773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.233950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.233978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.234106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.234137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.234287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.234315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.234508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.234538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.234726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.234756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.234926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.234955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.235120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.235151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.235304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.235334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 
00:33:01.704 [2024-07-15 10:06:18.235510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.235538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.235710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.235737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.235931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.235962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.236121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.236149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.236313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.236345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.236533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.236562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.236728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.236754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.236889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.236935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.237121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.237150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.237320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.237346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 
00:33:01.704 [2024-07-15 10:06:18.237509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.237538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.237704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.237734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.237903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.237931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.238080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.238106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.238264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.238291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.238442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.238468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.238591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.238618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.238770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.238796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.238945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.238974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 00:33:01.704 [2024-07-15 10:06:18.239145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.704 [2024-07-15 10:06:18.239176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.704 qpair failed and we were unable to recover it. 
00:33:01.705 [2024-07-15 10:06:18.239344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.239374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 00:33:01.705 [2024-07-15 10:06:18.239544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.239570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 00:33:01.705 [2024-07-15 10:06:18.239735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.239765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 00:33:01.705 [2024-07-15 10:06:18.239937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.239968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 00:33:01.705 [2024-07-15 10:06:18.240137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.240164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 00:33:01.705 [2024-07-15 10:06:18.240359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.240389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 00:33:01.705 [2024-07-15 10:06:18.240518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.240547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 00:33:01.705 [2024-07-15 10:06:18.240721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.240749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 00:33:01.705 [2024-07-15 10:06:18.240911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.240943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 00:33:01.705 [2024-07-15 10:06:18.241082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.241112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 
00:33:01.705 [2024-07-15 10:06:18.241288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.241315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 00:33:01.705 [2024-07-15 10:06:18.241441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.241469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 00:33:01.705 [2024-07-15 10:06:18.241616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.241642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 00:33:01.705 [2024-07-15 10:06:18.241764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.241796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 00:33:01.705 [2024-07-15 10:06:18.241974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.242005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 00:33:01.705 [2024-07-15 10:06:18.242161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.242190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 00:33:01.705 [2024-07-15 10:06:18.242334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.242360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 00:33:01.705 [2024-07-15 10:06:18.242532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.242560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 00:33:01.705 [2024-07-15 10:06:18.242732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.242761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 00:33:01.705 [2024-07-15 10:06:18.242931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.705 [2024-07-15 10:06:18.242958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.705 qpair failed and we were unable to recover it. 
00:33:01.705 [2024-07-15 10:06:18.243154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.705 [2024-07-15 10:06:18.243184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.705 qpair failed and we were unable to recover it.
[... the same three-line triplet — connect() failed with errno = 111, sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it — repeats back-to-back for every reconnect attempt from 10:06:18.243154 through 10:06:18.282891 (log timestamps 00:33:01.705–00:33:01.710) ...]
00:33:01.710 [2024-07-15 10:06:18.282858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.710 [2024-07-15 10:06:18.282891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.710 qpair failed and we were unable to recover it.
00:33:01.710 [2024-07-15 10:06:18.283064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.710 [2024-07-15 10:06:18.283093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.710 qpair failed and we were unable to recover it. 00:33:01.710 [2024-07-15 10:06:18.283267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.710 [2024-07-15 10:06:18.283294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.710 qpair failed and we were unable to recover it. 00:33:01.710 [2024-07-15 10:06:18.283439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.710 [2024-07-15 10:06:18.283465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.710 qpair failed and we were unable to recover it. 00:33:01.710 [2024-07-15 10:06:18.283613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.710 [2024-07-15 10:06:18.283657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.710 qpair failed and we were unable to recover it. 00:33:01.710 [2024-07-15 10:06:18.283830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.710 [2024-07-15 10:06:18.283860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.710 qpair failed and we were unable to recover it. 00:33:01.710 [2024-07-15 10:06:18.284050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.710 [2024-07-15 10:06:18.284077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.710 qpair failed and we were unable to recover it. 00:33:01.710 [2024-07-15 10:06:18.284196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.710 [2024-07-15 10:06:18.284238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.710 qpair failed and we were unable to recover it. 00:33:01.710 [2024-07-15 10:06:18.284366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.710 [2024-07-15 10:06:18.284396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.710 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.284591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.284618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.284744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.284771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 
00:33:01.711 [2024-07-15 10:06:18.284927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.284954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.285106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.285133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.285279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.285324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.285463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.285492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.285664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.285691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.285888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.285919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.286048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.286077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.286282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.286309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.286476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.286506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.286670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.286700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 
00:33:01.711 [2024-07-15 10:06:18.286873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.286907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.287097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.287126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.287320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.287347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.287491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.287523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.287669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.287697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.287889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.287916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.288061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.288087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.288234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.288261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.288451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.288481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.288653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.288681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 
00:33:01.711 [2024-07-15 10:06:18.288834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.288860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.289049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.289078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.289247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.289274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.289437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.289467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.289655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.289685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.289829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.289855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.290023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.290050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.290262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.290292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.290458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.290485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.290676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.290705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 
00:33:01.711 [2024-07-15 10:06:18.290868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.290909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.291082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.291109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.291268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.291295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.291485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.291515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.291680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.291706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.291831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.291859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.292032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.292059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.292237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.292264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.711 qpair failed and we were unable to recover it. 00:33:01.711 [2024-07-15 10:06:18.292435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.711 [2024-07-15 10:06:18.292464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.292627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.292657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 
00:33:01.712 [2024-07-15 10:06:18.292799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.292826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.292956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.292983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.293104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.293130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.293321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.293347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.293513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.293542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.293729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.293758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.293929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.293956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.294086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.294113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.294306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.294336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.294542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.294568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 
00:33:01.712 [2024-07-15 10:06:18.294699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.294728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.294902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.294938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.295089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.295115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.295294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.295331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.295529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.295556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.295730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.295757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.295948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.295978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.296144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.296185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.296333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.296359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.296510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.296537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 
00:33:01.712 [2024-07-15 10:06:18.296734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.296764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.296932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.296959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.297123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.297156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.297318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.297347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.297505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.297532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.297653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.297680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.297870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.297907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.298091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.298117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.298254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.298281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.298455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.298501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 
00:33:01.712 [2024-07-15 10:06:18.298645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.298672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.298811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.298853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.299044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.712 [2024-07-15 10:06:18.299089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.712 qpair failed and we were unable to recover it. 00:33:01.712 [2024-07-15 10:06:18.299246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.299275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.299459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.299486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.299722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.299780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.299984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.300012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.300151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.300181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.300341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.300372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.300541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.300568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 
00:33:01.713 [2024-07-15 10:06:18.300693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.300739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.300901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.300932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.301103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.301131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.301298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.301329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.301593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.301645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.301824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.301852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.302000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.302028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.302172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.302201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.302346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.302372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.302488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.302516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 
00:33:01.713 [2024-07-15 10:06:18.302692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.302719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.302843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.302871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.303030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.303058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.303228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.303264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.303405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.303433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.303575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.303603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.303780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.303810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.303947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.303974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.304128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.304174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.304333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.304364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 
00:33:01.713 [2024-07-15 10:06:18.304540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.304569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.304736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.304766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.304930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.304961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.305129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.305156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.305350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.305380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.305596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.305626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.305795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.305823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.306002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.306031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.306168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.306199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.306370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.306398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 
00:33:01.713 [2024-07-15 10:06:18.306560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.306591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.306747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.306777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.306950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.306978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.713 [2024-07-15 10:06:18.307125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.713 [2024-07-15 10:06:18.307152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.713 qpair failed and we were unable to recover it. 00:33:01.714 [2024-07-15 10:06:18.307365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.714 [2024-07-15 10:06:18.307392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.714 qpair failed and we were unable to recover it. 00:33:01.714 [2024-07-15 10:06:18.307540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.714 [2024-07-15 10:06:18.307567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.714 qpair failed and we were unable to recover it. 00:33:01.714 [2024-07-15 10:06:18.307714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.714 [2024-07-15 10:06:18.307741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.714 qpair failed and we were unable to recover it. 00:33:01.714 [2024-07-15 10:06:18.307887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.714 [2024-07-15 10:06:18.307915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.714 qpair failed and we were unable to recover it. 00:33:01.714 [2024-07-15 10:06:18.308065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.714 [2024-07-15 10:06:18.308093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.714 qpair failed and we were unable to recover it. 00:33:01.714 [2024-07-15 10:06:18.308234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.714 [2024-07-15 10:06:18.308265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:01.714 qpair failed and we were unable to recover it. 
00:33:01.714 [2024-07-15 10:06:18.308498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.714 [2024-07-15 10:06:18.308558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:01.714 qpair failed and we were unable to recover it.
00:33:01.714 [2024-07-15 10:06:18.311012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:01.714 [2024-07-15 10:06:18.311045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:01.714 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it.) repeats for every retry from 10:06:18.308758 through 10:06:18.350513, alternating between tqpair=0x7f0268000b90 and tqpair=0x7f0258000b90, always with addr=10.0.0.2, port=4420 ...]
00:33:01.719 [2024-07-15 10:06:18.350655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.350685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.350823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.350851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.350981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.351009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.351128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.351172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.351372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.351399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.351564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.351594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.351763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.351790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.351936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.351964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.352107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.352150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.352339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.352368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 
00:33:01.719 [2024-07-15 10:06:18.352562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.352590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.352782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.352812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.352940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.352970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.353105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.353133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.353322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.353352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.353542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.353571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.353735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.353763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.353894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.353922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.354093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.354120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.354238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.354265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 
00:33:01.719 [2024-07-15 10:06:18.354432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.354462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.354620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.354650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.719 [2024-07-15 10:06:18.354823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.719 [2024-07-15 10:06:18.354850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.719 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.355009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.355036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.355192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.355220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.355365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.355393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.355551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.355581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.355724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.355754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.355923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.355951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.356116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.356147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 
00:33:01.720 [2024-07-15 10:06:18.356299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.356334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.356533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.356560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.356728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.356758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.356949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.356980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.357117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.357144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.357296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.357324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.357515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.357545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.357736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.357763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.357915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.357943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.358137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.358167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 
00:33:01.720 [2024-07-15 10:06:18.358312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.358339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.358476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.358520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.358692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.358719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.358865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.358899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.359073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.359103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.359263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.359294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.359447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.359475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.359656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.359683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.359899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.359930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.360076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.360103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 
00:33:01.720 [2024-07-15 10:06:18.360248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.360292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.360454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.360484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.360680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.360707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.360868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.360909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.361068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.361099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.361263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.361290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.361436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.361482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.361618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.361648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.361821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.361849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.361977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.362005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 
00:33:01.720 [2024-07-15 10:06:18.362125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.362168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.362336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.362363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.362480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.362523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.720 qpair failed and we were unable to recover it. 00:33:01.720 [2024-07-15 10:06:18.362712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.720 [2024-07-15 10:06:18.362739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.362910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.362937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.363085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.363115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.363272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.363302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.363469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.363496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.363637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.363681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.363875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.363906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 
00:33:01.721 [2024-07-15 10:06:18.364052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.364084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.364208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.364253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.364407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.364437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.364630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.364657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.364788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.364818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.364974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.365005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.365156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.365185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.365369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.365413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.365574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.365601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.365769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.365796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 
00:33:01.721 [2024-07-15 10:06:18.365965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.365995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.366130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.366160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.366353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.366380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.366570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.366600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.366759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.366789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.366964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.366992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.367157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.367187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.367318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.367349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.367518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.367546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.367713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.367744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 
00:33:01.721 [2024-07-15 10:06:18.367956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.367984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.368130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.368157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.368321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.368352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.368527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.368557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.368746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.368773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.368942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.368973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.369134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.369164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.369334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.369361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.721 qpair failed and we were unable to recover it. 00:33:01.721 [2024-07-15 10:06:18.369482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.721 [2024-07-15 10:06:18.369526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.369700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.369730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 
00:33:01.722 [2024-07-15 10:06:18.369903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.369931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.370089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.370119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.370274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.370304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.370497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.370524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.370660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.370690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.370885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.370915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.371083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.371112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.371269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.371299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.371460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.371490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.371637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.371665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 
00:33:01.722 [2024-07-15 10:06:18.371817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.371848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.372023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.372054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.372220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.372247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.372408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.372438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.372600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.372631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.372794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.372825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.372997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.373024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.373196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.373240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.373413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.373440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.373592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.373619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 
00:33:01.722 [2024-07-15 10:06:18.373768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.373794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.373945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.373972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.374098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.374126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.374271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.374299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.374492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.374519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.374634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.374678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.374839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.374870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.375053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.375081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.375194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.375221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.375359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.375389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 
00:33:01.722 [2024-07-15 10:06:18.375554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.375582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.375745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.375775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.375932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.375963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.376135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.376162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.376325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.376355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.376524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.376554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.376720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.376747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.376896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.376941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.722 qpair failed and we were unable to recover it. 00:33:01.722 [2024-07-15 10:06:18.377096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.722 [2024-07-15 10:06:18.377126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.723 qpair failed and we were unable to recover it. 00:33:01.723 [2024-07-15 10:06:18.377267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.723 [2024-07-15 10:06:18.377294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.723 qpair failed and we were unable to recover it. 
00:33:01.727 [2024-07-15 10:06:18.414325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.727 [2024-07-15 10:06:18.414354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.727 qpair failed and we were unable to recover it. 00:33:01.727 [2024-07-15 10:06:18.414523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.727 [2024-07-15 10:06:18.414549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.727 qpair failed and we were unable to recover it. 00:33:01.727 [2024-07-15 10:06:18.414745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.727 [2024-07-15 10:06:18.414774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.727 qpair failed and we were unable to recover it. 00:33:01.727 [2024-07-15 10:06:18.414933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.727 [2024-07-15 10:06:18.414963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.727 qpair failed and we were unable to recover it. 00:33:01.727 [2024-07-15 10:06:18.415161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.727 [2024-07-15 10:06:18.415188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.727 qpair failed and we were unable to recover it. 00:33:01.727 [2024-07-15 10:06:18.415385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.727 [2024-07-15 10:06:18.415414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.727 qpair failed and we were unable to recover it. 00:33:01.727 [2024-07-15 10:06:18.415586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.415614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.415794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.415821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.416004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.416031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.416197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.416227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 
00:33:01.728 [2024-07-15 10:06:18.416387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.416413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.416564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.416609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.416807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.416836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.417076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.417104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.417226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.417269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.417437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.417466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.417612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.417638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.417795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.417822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.417942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.417969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.418142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.418177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 
00:33:01.728 [2024-07-15 10:06:18.418394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.418422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.418600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.418641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.418809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.418836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.418991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.419017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.419179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.419206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.419357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.419384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.419557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.419586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.419749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.419778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.419929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.419956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.420074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.420101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 
00:33:01.728 [2024-07-15 10:06:18.420240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.420270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.420447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.420473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.420626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.420653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.420887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.420931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.421105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.421131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.421303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.421332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.421494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.421524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.421720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.421747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.421859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.421913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.422102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.422131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 
00:33:01.728 [2024-07-15 10:06:18.422273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.422299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.422413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.422440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.422639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.422668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.422803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.422830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.423033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.423060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.423265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.423294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.423460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.423487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.423636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.728 [2024-07-15 10:06:18.423685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.728 qpair failed and we were unable to recover it. 00:33:01.728 [2024-07-15 10:06:18.423846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.423885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.424053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.424079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 
00:33:01.729 [2024-07-15 10:06:18.424276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.424305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.424495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.424524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.424689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.424716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.424890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.424921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.425087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.425117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.425285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.425312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.425478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.425507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.425677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.425706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.425904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.425947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.426073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.426100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 
00:33:01.729 [2024-07-15 10:06:18.426289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.426319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.426469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.426496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.426690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.426720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.426913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.426940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.427089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.427116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.427237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.427281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.427456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.427485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.427648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.427675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.427843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.427872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.428087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.428119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 
00:33:01.729 [2024-07-15 10:06:18.428300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.428328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.428524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.428553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.428706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.428736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.428898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.428926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.429096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.429125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.429285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.429317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.429466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.429493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.429645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.429688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.429855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.429896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.430056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.430084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 
00:33:01.729 [2024-07-15 10:06:18.430213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.430240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.430459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.430490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.430660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.430687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.430890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.430931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.431062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.431091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.431292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.431319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.729 [2024-07-15 10:06:18.431486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.729 [2024-07-15 10:06:18.431516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.729 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.431669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.431706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.431975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.432001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.432124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.432151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 
00:33:01.730 [2024-07-15 10:06:18.432356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.432385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.432557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.432584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.432757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.432784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.432967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.432999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.433173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.433201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.433371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.433402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.433569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.433598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.433743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.433770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.433915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.433943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.434069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.434096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 
00:33:01.730 [2024-07-15 10:06:18.434224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.434251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.434450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.434479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.434669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.434699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.434834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.434861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.435061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.435092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.435225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.435255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.435397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.435423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.435547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.435573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.435717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.435744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.435860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.435897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 
00:33:01.730 [2024-07-15 10:06:18.436106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.436147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.436274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.436303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.436445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.436472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.436588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.436615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.436819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.436849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.437019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.437047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.437214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.437243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.437405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.437435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.437614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.437641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.437819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.437846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 
00:33:01.730 [2024-07-15 10:06:18.438014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.438044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.438221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.438247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.438388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.438415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.438577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.438608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.438788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.438815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.438936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.438963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.439118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.439145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.439292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.439323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.439553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.439583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.730 [2024-07-15 10:06:18.439737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.439767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 
00:33:01.730 [2024-07-15 10:06:18.439928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.730 [2024-07-15 10:06:18.439961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.730 qpair failed and we were unable to recover it. 00:33:01.731 [2024-07-15 10:06:18.440129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.731 [2024-07-15 10:06:18.440168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.731 qpair failed and we were unable to recover it. 00:33:01.731 [2024-07-15 10:06:18.440358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.731 [2024-07-15 10:06:18.440388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.731 qpair failed and we were unable to recover it. 00:33:01.731 [2024-07-15 10:06:18.440560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.731 [2024-07-15 10:06:18.440586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:01.731 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.440749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.440779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.440929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.440959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.441113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.441139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.441277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.441304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.441425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.441451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.441571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.441599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 
00:33:02.020 [2024-07-15 10:06:18.441727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.441773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.441956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.441987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.442159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.442186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.442360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.442389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.442549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.442579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.442744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.442771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.442922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.442968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.443131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.443160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.443333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.443360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.443474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.443501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 
00:33:02.020 [2024-07-15 10:06:18.443644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.443671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.443836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.443862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.444118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.444148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.444278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.444308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.444475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.444501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.444631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.444658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.444800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.444832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.444990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.445017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.445164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.445190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.445339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.445366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 
00:33:02.020 [2024-07-15 10:06:18.445489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.445516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.445690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.445717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.445941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.445968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.446112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.446139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.446380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.446407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.446634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.446664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.446806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.020 [2024-07-15 10:06:18.446837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.020 qpair failed and we were unable to recover it. 00:33:02.020 [2024-07-15 10:06:18.446993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.447025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.447169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.447196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.447376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.447406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 
00:33:02.021 [2024-07-15 10:06:18.447601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.447627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.447799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.447828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.448042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.448073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.448220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.448247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.448364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.448390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.448534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.448563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.448701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.448729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.448934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.448965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.449126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.449157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.449323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.449351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 
00:33:02.021 [2024-07-15 10:06:18.449501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.449546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.449717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.449747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.449948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.449975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.450106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.450135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.450309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.450339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.450504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.450532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.450691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.450721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.450895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.450925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.451097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.451123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.451327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.451356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 
00:33:02.021 [2024-07-15 10:06:18.451525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.451556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.451774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.451803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.452003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.452030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.452185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.452212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.452367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.452393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.452554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.452583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.452774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.452800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.452977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.453004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.453177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.453207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.453367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.453404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 
00:33:02.021 [2024-07-15 10:06:18.453585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.453612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.453731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.453776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.453939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.453969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.454125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.454155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.454296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.454322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.454444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.454470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.454610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.454637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.454792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.454842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.455026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.455054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.455172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.455199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 
00:33:02.021 [2024-07-15 10:06:18.455398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.455428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.455587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.455617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.455755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.455782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.455934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.455961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.456078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.456104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.456227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.456254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.456407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.456451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.456613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.456643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.456817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.456844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.456998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.457025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 
00:33:02.021 [2024-07-15 10:06:18.457196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.457225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.457423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.457449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.457707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.457758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.457970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.457996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.458118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.458151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.458319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.458348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.458477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.458516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.458709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.458744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.458954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.458981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.459134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.459160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 
00:33:02.021 [2024-07-15 10:06:18.459346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.459373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.459564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.459601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.459833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.459859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.460013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.460039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.460185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.460211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.460387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.460414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.460596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.460627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.460795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.460824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.460995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.461023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.461171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.461196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 
00:33:02.021 [2024-07-15 10:06:18.461455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.461508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.021 qpair failed and we were unable to recover it. 00:33:02.021 [2024-07-15 10:06:18.461674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.021 [2024-07-15 10:06:18.461703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.461874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.461907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.462026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.462052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.462213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.462242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.462433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.462459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.462734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.462784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.462989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.463015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.463147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.463173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.463300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.463343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 
00:33:02.022 [2024-07-15 10:06:18.463512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.463541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.463710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.463736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.463906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.463941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.464109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.464138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.464374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.464400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.464615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.464644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.464777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.464807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.465006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.465033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.465212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.465241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.465395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.465424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 
00:33:02.022 [2024-07-15 10:06:18.465583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.465609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.465775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.465812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.465983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.466010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.466146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.466173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.466371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.466400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.466563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.466592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.466756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.466783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.466939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.466967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.467123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.467167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.467336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.467362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 
00:33:02.022 [2024-07-15 10:06:18.467488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.467516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.467644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.467671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.467782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.467809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.467944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.467971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.468121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.468147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.468334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.468364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.468556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.468585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.468754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.468791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.468971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.468997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.469116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.469142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 
00:33:02.022 [2024-07-15 10:06:18.469294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.469323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.469492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.469519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.469691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.469720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.469858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.469894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.470072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.470098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.470250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.470277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.470388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.470415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.470560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.470587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.470738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.470764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.470954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.470981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 
00:33:02.022 [2024-07-15 10:06:18.471136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.471162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.471312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.471339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.471535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.471564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.471729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.471755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.471981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.472009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.472158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.472185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.472369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.472397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.472557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.472587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.472739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.472769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.472941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.472968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 
00:33:02.022 [2024-07-15 10:06:18.473129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.473160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.473346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.473376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.473542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.473569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.473765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.473794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.473960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.473991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.474158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.474184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.474377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.474407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.474572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.474601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.474756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.474783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 00:33:02.022 [2024-07-15 10:06:18.474980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.022 [2024-07-15 10:06:18.475010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.022 qpair failed and we were unable to recover it. 
00:33:02.023 [2024-07-15 10:06:18.475190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.023 [2024-07-15 10:06:18.475219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.023 qpair failed and we were unable to recover it. 00:33:02.023 [2024-07-15 10:06:18.475384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.023 [2024-07-15 10:06:18.475410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.023 qpair failed and we were unable to recover it. 00:33:02.023 [2024-07-15 10:06:18.475559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.023 [2024-07-15 10:06:18.475585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.023 qpair failed and we were unable to recover it. 00:33:02.023 [2024-07-15 10:06:18.475715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.023 [2024-07-15 10:06:18.475741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.023 qpair failed and we were unable to recover it. 00:33:02.023 [2024-07-15 10:06:18.475899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.023 [2024-07-15 10:06:18.475937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.023 qpair failed and we were unable to recover it. 00:33:02.023 [2024-07-15 10:06:18.476072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.023 [2024-07-15 10:06:18.476101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.023 qpair failed and we were unable to recover it. 00:33:02.023 [2024-07-15 10:06:18.476288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.023 [2024-07-15 10:06:18.476314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.023 qpair failed and we were unable to recover it. 00:33:02.023 [2024-07-15 10:06:18.476463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.023 [2024-07-15 10:06:18.476489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.023 qpair failed and we were unable to recover it. 00:33:02.023 [2024-07-15 10:06:18.476613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.023 [2024-07-15 10:06:18.476655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.023 qpair failed and we were unable to recover it. 00:33:02.023 [2024-07-15 10:06:18.476861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.023 [2024-07-15 10:06:18.476896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.023 qpair failed and we were unable to recover it. 
00:33:02.025 [2024-07-15 10:06:18.514547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.514574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.514695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.514721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.514918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.514948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.515094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.515120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.515248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.515274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.515412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.515438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.515580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.515606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.515755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.515782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.515926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.515971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.516111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.516138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 
00:33:02.025 [2024-07-15 10:06:18.516285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.516330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.516465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.516494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.516639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.516665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.516786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.516813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.517010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.517040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.517180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.517206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.517358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.517400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.517560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.517589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.517760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.517786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.517983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.518013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 
00:33:02.025 [2024-07-15 10:06:18.518147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.518176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.518323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.518349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.518490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.518532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.518701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.518729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.518908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.518935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.519063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.519089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.519238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.025 [2024-07-15 10:06:18.519264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.025 qpair failed and we were unable to recover it. 00:33:02.025 [2024-07-15 10:06:18.519438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.519464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.519629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.519658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.519786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.519815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 
00:33:02.026 [2024-07-15 10:06:18.519988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.520015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.520143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.520169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.520294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.520320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.520488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.520514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.520679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.520708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.520863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.520900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.521047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.521074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.521185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.521215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.521366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.521395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.521542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.521568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 
00:33:02.026 [2024-07-15 10:06:18.521737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.521781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.521916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.521946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.522120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.522146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.522263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.522290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.522460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.522490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.522633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.522659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.522810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.522836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.523025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.523055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.523200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.523227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.523357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.523383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 
00:33:02.026 [2024-07-15 10:06:18.523506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.523532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.523691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.523718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.523839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.523865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.523997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.524023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.524134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.524161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.524310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.524352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.524501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.524527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.524701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.524728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.524889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.524916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.525041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.525085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 
00:33:02.026 [2024-07-15 10:06:18.525229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.525256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.525410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.525436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.525586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.525617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.525764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.525790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.525919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.525963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.526100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.526129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.526277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.526303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.526450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.526476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.526647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.526676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.526846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.526873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 
00:33:02.026 [2024-07-15 10:06:18.527041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.527070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.527231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.527260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.527468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.527495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.527664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.527693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.527856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.527892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.528047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.528074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.528226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.528253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.528428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.528457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.528660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.528690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 00:33:02.026 [2024-07-15 10:06:18.528822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.528851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it. 
00:33:02.026 [2024-07-15 10:06:18.529020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.026 [2024-07-15 10:06:18.529065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.026 qpair failed and we were unable to recover it.
[... from 10:06:18.529020 onward the same failure is also reported for a second qpair, tqpair=0x7f0258000b90, interleaved with further attempts on tqpair=0xb2c450; roughly 40 repetitions through 10:06:18.536660, all against 10.0.0.2:4420 ...]
[... all remaining attempts target tqpair=0x7f0258000b90 and fail identically (connect() errno = 111 against 10.0.0.2:4420, qpair failed and not recovered); roughly 60 repetitions between 10:06:18.536848 and 10:06:18.548042 ...]
00:33:02.028 [2024-07-15 10:06:18.548216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.548249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.548439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.548469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.548634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.548665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.548860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.548893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.549039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.549070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.549202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.549233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.549435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.549462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.549661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.549691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.549847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.549883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.550067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.550094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 
00:33:02.028 [2024-07-15 10:06:18.550262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.550291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.550444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.550475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.550646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.550674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.550792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.550835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.550994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.551021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.551166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.551193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.551391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.551433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.551569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.551601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.551777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.551804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.551956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.551984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 
00:33:02.028 [2024-07-15 10:06:18.552155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.552185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.552380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.552408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.552570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.552600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.552740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.552771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.552956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.552983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.553114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.553152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.553341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.553371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.553543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.553570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.553745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.553775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.553968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.553996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 
00:33:02.028 [2024-07-15 10:06:18.554139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.554166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.554297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.554341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.554501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.554535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.554676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.554714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.554863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.554914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.555090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.555120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.555303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.555330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.555455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.555500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.555658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.555688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.555890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.555918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 
00:33:02.028 [2024-07-15 10:06:18.556072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.556102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.556305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.556335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.556537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.556564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.556727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.556758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.556936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.556968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.557135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.557163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.557318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.557345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.557499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.557526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.557667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.557695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.557906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.557936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 
00:33:02.028 [2024-07-15 10:06:18.558101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.558130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.558326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.558353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.558494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.558524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.558670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.558701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.558942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.558970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.559090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.559117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.559293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.559330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.559501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.559528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.559727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.559757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.559938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.559967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 
00:33:02.028 [2024-07-15 10:06:18.560119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.560147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.560336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.560363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.560511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.560537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.560694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.560730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.560881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.560912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.561067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.561094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.561262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.561292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.561454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.561484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.561709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.561738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.561896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.561936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 
00:33:02.028 [2024-07-15 10:06:18.562111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.562149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.562302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.028 [2024-07-15 10:06:18.562330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.028 qpair failed and we were unable to recover it. 00:33:02.028 [2024-07-15 10:06:18.562444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.562475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.562653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.562683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.562860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.562893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.563046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.563073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.563281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.563312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.563580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.563642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.563840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.563870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.564040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.564067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 
00:33:02.029 [2024-07-15 10:06:18.564209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.564236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.564379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.564409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.564576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.564606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.564825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.564855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.565042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.565069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.565213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.565243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.565414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.565443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.565633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.565663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.565851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.565887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.566059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.566086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 
00:33:02.029 [2024-07-15 10:06:18.566285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.566315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.566505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.566535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.566768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.566799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.566943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.566971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.567148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.567191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.567339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.567368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.567518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.567562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.567723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.567754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.567950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.567978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.568117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.568158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 
00:33:02.029 [2024-07-15 10:06:18.568336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.568366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.568577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.568606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.568741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.568771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.568972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.568999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.569145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.569171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.569340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.569388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.569549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.569578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.569765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.569794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.569952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.569994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.570191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.570222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 
00:33:02.029 [2024-07-15 10:06:18.570394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.570422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.570564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.570611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.570742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.570777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.570947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.570975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.571170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.571200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.571334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.571364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.571525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.571554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.571751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.571781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.571984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.572013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.572142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.572169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 
00:33:02.029 [2024-07-15 10:06:18.572281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.572323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.572514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.572544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.572765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.572795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.572994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.573022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.573180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.573210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.573357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.573384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.573540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.573567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.573688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.573716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.573869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.573903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 00:33:02.029 [2024-07-15 10:06:18.574058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.029 [2024-07-15 10:06:18.574085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.029 qpair failed and we were unable to recover it. 
00:33:02.029 [2024-07-15 10:06:18.574246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.029 [2024-07-15 10:06:18.574276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:02.029 qpair failed and we were unable to recover it.
00:33:02.030 [... elided: the three-line error above repeats for every reconnect attempt from 10:06:18.574 through 10:06:18.588, cycling among tqpair values 0x7f0258000b90, 0x7f0260000b90, and 0x7f0268000b90; only the first instance is kept ...]
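For anyone triaging this run: errno = 111 on Linux is ECONNREFUSED, meaning nothing was accepting connections on 10.0.0.2:4420 while the target application was down, so every reconnect attempt by the host fails immediately and the qpair cannot recover. The sketch below reproduces that failure mode with plain POSIX sockets; it is an illustrative stand-in, not SPDK's actual posix_sock_create() code (only the address and port are taken from the log).

```c
/* Minimal illustration of the "connect() failed, errno = 111" loop above.
 * Plain POSIX sockets, not SPDK's posix.c. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 1; attempt <= 5; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("connected on attempt %d\n", attempt);
            close(fd);
            return 0;
        }
        /* While no target listens on 4420, this prints errno 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        sleep(1);                                 /* crude backoff before retrying */
    }
    return 1;
}
```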
00:33:02.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2056758 Killed                  "${NVMF_APP[@]}" "$@"
00:33:02.030 [... elided: the errno = 111 error triple keeps repeating for tqpair=0x7f0268000b90 while the target is down ...]
00:33:02.030 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:33:02.030 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:33:02.030 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:02.030 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:33:02.030 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:02.031 [... elided: error triple continues for tqpair=0x7f0268000b90 ...]
00:33:02.031 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2057312
00:33:02.031 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:33:02.031 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2057312
00:33:02.031 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2057312 ']'
00:33:02.031 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:02.031 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:02.031 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:02.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:02.031 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:02.031 [... elided: error triple continues for tqpair=0x7f0268000b90 while the host keeps retrying ...]
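The waitforlisten 2057312 step, together with rpc_addr=/var/tmp/spdk.sock and max_retries=100 above, is the harness polling the freshly restarted nvmf_tgt until its RPC socket accepts connections. A minimal sketch of that wait, assuming a plain UNIX-domain connect() probe rather than the real autotest_common.sh helper (path and retry count are the ones from the log):

```c
/* Hedged sketch of what "waitforlisten" amounts to: probe a UNIX-domain
 * socket until connect() succeeds or the retries run out. Not the actual
 * shell helper from autotest_common.sh. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { 0 };
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;               /* target is up and listening */
        }
        close(fd);
        usleep(100 * 1000);         /* 100 ms between probes */
    }
    return -1;                      /* gave up, mirroring the shell timeout */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        puts("nvmf_tgt is listening on /var/tmp/spdk.sock");
    else
        puts("timed out waiting for /var/tmp/spdk.sock");
    return 0;
}
```

Until that probe succeeds, the host side keeps producing the ECONNREFUSED triples summarized above.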
00:33:02.031 [... elided: the connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock / qpair failed triple repeats for tqpair=0x7f0268000b90 from 10:06:18.596 through 10:06:18.614 and continues below ...]
00:33:02.032 [2024-07-15 10:06:18.614398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.614428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.614558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.614587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.614752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.614783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.614946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.614976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.615111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.615140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.615312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.615339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.615534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.615563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.615723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.615752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.615926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.615952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.616063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.616105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 
00:33:02.032 [2024-07-15 10:06:18.616245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.616276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.616477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.616504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.616631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.616658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.616807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.616833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.616972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.616999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.617114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.617140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.617320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.617349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.617514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.617541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.617701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.617731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.617899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.617948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 
00:33:02.032 [2024-07-15 10:06:18.618094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.618120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.618241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.618268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.618465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.618493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.618659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.618685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.618832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.618859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.618987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.619014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.619160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.619186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.619357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.619384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.619613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.619641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.619787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.619814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 
00:33:02.032 [2024-07-15 10:06:18.619988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.620018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.620174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.032 [2024-07-15 10:06:18.620203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.032 qpair failed and we were unable to recover it. 00:33:02.032 [2024-07-15 10:06:18.620371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.620400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.620526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.620570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.620692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.620720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.620897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.620926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.621044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.621070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.621218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.621244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.621487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.621513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.621693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.621721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 
00:33:02.033 [2024-07-15 10:06:18.621884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.621913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.622052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.622078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.622228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.622258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.622427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.622455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.622636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.622663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.622792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.622818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.622958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.622985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.623165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.623191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.623326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.623353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.623486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.623513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 
00:33:02.033 [2024-07-15 10:06:18.623647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.623673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.623822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.623865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.623995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.624022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.624161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.624188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.624420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.624447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.624597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.624625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.624767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.624794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.624939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.624982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.625212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.625239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.625428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.625454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 
00:33:02.033 [2024-07-15 10:06:18.625620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.625648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.625768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.625795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.625939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.625966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.626112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.626155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.626272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.626300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.626461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.626488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.626675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.626703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.626860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.626894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.627035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.627062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.627241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.627268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 
00:33:02.033 [2024-07-15 10:06:18.627421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.627449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.627590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.627617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.627763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.627790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.627939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.627967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.628116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.628144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.628296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.628323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.628476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.628502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.628648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.628674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.628828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.628854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.628988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.629015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 
00:33:02.033 [2024-07-15 10:06:18.629166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.629192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.629330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.629356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.629527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.629557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.629705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.629731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.629857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.629903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.630078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.630104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.630224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.630251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.630474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.630500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.630640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.630667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.630818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.630845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 
00:33:02.033 [2024-07-15 10:06:18.631029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.631057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.631173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.631199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.631319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.631345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.631489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.631515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.631644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.631671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.631816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.631843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.632009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.632036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.632188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.632214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.632355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.632381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.632556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.632583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 
00:33:02.033 [2024-07-15 10:06:18.632694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.632720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.632868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.632901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.633045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.633072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.633221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.633247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.633422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.633448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.633626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.633652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.633805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.633831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.633993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.634020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.634246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.634271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 00:33:02.033 [2024-07-15 10:06:18.634443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.033 [2024-07-15 10:06:18.634469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.033 qpair failed and we were unable to recover it. 
00:33:02.034 [2024-07-15 10:06:18.634620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.634645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 00:33:02.034 [2024-07-15 10:06:18.634758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.634784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 00:33:02.034 [2024-07-15 10:06:18.634940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.634968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 00:33:02.034 [2024-07-15 10:06:18.635097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.635124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 00:33:02.034 [2024-07-15 10:06:18.635275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.635301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 00:33:02.034 [2024-07-15 10:06:18.635475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.635501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 00:33:02.034 [2024-07-15 10:06:18.635648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.635674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 00:33:02.034 [2024-07-15 10:06:18.635815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.635841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 00:33:02.034 [2024-07-15 10:06:18.636000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.636028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 00:33:02.034 [2024-07-15 10:06:18.636183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.636209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 
00:33:02.034 [2024-07-15 10:06:18.636336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.636362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 00:33:02.034 [2024-07-15 10:06:18.636513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.636540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 00:33:02.034 [2024-07-15 10:06:18.636711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.636744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 00:33:02.034 [2024-07-15 10:06:18.636893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.636921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 00:33:02.034 [2024-07-15 10:06:18.637041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.637067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 00:33:02.034 [2024-07-15 10:06:18.637228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.637254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 00:33:02.034 [2024-07-15 10:06:18.637407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.637433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 00:33:02.034 [2024-07-15 10:06:18.637594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.637620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 00:33:02.034 [2024-07-15 10:06:18.637770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.637797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 00:33:02.034 [2024-07-15 10:06:18.637974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.034 [2024-07-15 10:06:18.638000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.034 qpair failed and we were unable to recover it. 
00:33:02.034 [2024-07-15 10:06:18.638147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.034 [2024-07-15 10:06:18.638173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.034 qpair failed and we were unable to recover it.
[... the same three-line failure repeats back-to-back roughly 30 more times (timestamps 10:06:18.638 through 10:06:18.643), every connect() attempt refused by 10.0.0.2:4420 ...]
00:33:02.034 [2024-07-15 10:06:18.643761] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:33:02.034 [2024-07-15 10:06:18.643847] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
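For context: errno = 111 on Linux is ECONNREFUSED, meaning the TCP handshake reached the peer but nothing was accepting on port 4420 (the NVMe/TCP port used throughout this run), so nvme_tcp_qpair_connect_sock() can never bring the qpair up. A minimal, self-contained sketch of the same failure mode follows; the address and port simply mirror the log, and the snippet is illustrative only, not SPDK code:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Any reachable host with no listener on the port reproduces the log's
     * errno = 111; 10.0.0.2:4420 mirrors the failing target above. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the port closed this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}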
[... the connect()/qpair failure resumes immediately after the EAL banner and repeats continuously, well over a hundred more identical attempts (timestamps 10:06:18.643 through 10:06:18.672, all on tqpair=0x7f0268000b90), each failing with errno = 111 ...]
00:33:02.037 [2024-07-15 10:06:18.672817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.672843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.673019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.673047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.673205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.673246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.673404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.673432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.673600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.673627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.673777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.673804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.673997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.674025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.674156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.674184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.674337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.674364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.674519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.674546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 
00:33:02.037 [2024-07-15 10:06:18.674670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.674697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.674828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.674856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.674991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.675019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.675187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.675213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.675386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.675412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.675560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.675586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.675711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.675737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.675912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.675940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.676080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.676109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.676242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.676269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 
00:33:02.037 [2024-07-15 10:06:18.676415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.676442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.676616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.676643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.676763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.676789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.676938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.676965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.677119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.677145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.677292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.677319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.677492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.677518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.677647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.677673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.677826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.677853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.678031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.678072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 
00:33:02.037 [2024-07-15 10:06:18.678200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.678228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.678397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.678425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.678601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.678634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.678791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.678818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.678969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.678996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.679121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.679148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.679302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.679330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.679483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.679510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.679640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.679668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.679821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.679848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 
00:33:02.037 [2024-07-15 10:06:18.680025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.680053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.680228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.680255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.680402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.680429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.680589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.680616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.680751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.680778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.680929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.680957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.681117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.681145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.681271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.681298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.681451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.681479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.681650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.681678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 
00:33:02.037 [2024-07-15 10:06:18.681822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.681850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.682003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.682031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.682182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.682209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.682363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.682391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.682517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.682543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.682683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.682710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.682862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.682894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.683048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.683074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.683198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.683224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.683391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.683418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 
00:33:02.037 [2024-07-15 10:06:18.683570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.683597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.683743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.683769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.683948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.683975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.684097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.684123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.684288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.684314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.684434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.684460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.684632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.684658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.684823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.684849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.685030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.685057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.685208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.685234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 
00:33:02.037 [2024-07-15 10:06:18.685366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.685392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.685510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.685536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.685661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.685692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 EAL: No free 2048 kB hugepages reported on node 1 00:33:02.037 [2024-07-15 10:06:18.685847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.685874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.686011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.686038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.686164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.686191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.037 qpair failed and we were unable to recover it. 00:33:02.037 [2024-07-15 10:06:18.686313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.037 [2024-07-15 10:06:18.686341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.686489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.686516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.686699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.686725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.686874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.686907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 
00:33:02.038 [2024-07-15 10:06:18.687058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.687085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.687195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.687221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.687368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.687394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.687548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.687574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.687701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.687728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.687853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.687886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.688024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.688052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.688178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.688204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.688368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.688394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.688510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.688538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 
00:33:02.038 [2024-07-15 10:06:18.688666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.688692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.688843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.688869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.689002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.689028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.689150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.689177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.689339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.689367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.689499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.689526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.689658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.689685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.689809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.689836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.689977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.690004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.690131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.690169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 
00:33:02.038 [2024-07-15 10:06:18.690291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.690319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.690465] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:02.038 [2024-07-15 10:06:18.690472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.690503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.690668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.690696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.690825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.690852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.691009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.691038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.691187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.691214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.691350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.691377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.691524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.691551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.691725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.691753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 
00:33:02.038 [2024-07-15 10:06:18.691903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.691931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.692107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.692145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.692265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.692293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.692444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.692472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.692588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.692616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.692765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.692792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.692916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.692943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.693091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.693117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.693298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.693325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.693474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.693501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 
00:33:02.038 [2024-07-15 10:06:18.693653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.693680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.693805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.693832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.693984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.694010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.694132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.694160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.694281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.694309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.694460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.694487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.694607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.694635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.694758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.694785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.694935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.694962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.695104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.695130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 
00:33:02.038 [2024-07-15 10:06:18.695274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.695302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.695451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.695478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.695622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.695650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.695769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.695796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.695926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.695954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.696105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.696142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.696257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.696284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.696403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.696431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.696579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.696606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 00:33:02.038 [2024-07-15 10:06:18.696739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.038 [2024-07-15 10:06:18.696771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.038 qpair failed and we were unable to recover it. 
00:33:02.038 [2024-07-15 10:06:18.696899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.038 [2024-07-15 10:06:18.696934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.038 qpair failed and we were unable to recover it.
00:33:02.038 [2024-07-15 10:06:18.697052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.038 [2024-07-15 10:06:18.697079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.038 qpair failed and we were unable to recover it.
00:33:02.038 [2024-07-15 10:06:18.697196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.038 [2024-07-15 10:06:18.697223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.038 qpair failed and we were unable to recover it.
00:33:02.038 [2024-07-15 10:06:18.697372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.038 [2024-07-15 10:06:18.697399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.038 qpair failed and we were unable to recover it.
00:33:02.038 [2024-07-15 10:06:18.697569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.038 [2024-07-15 10:06:18.697596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.038 qpair failed and we were unable to recover it.
00:33:02.038 [2024-07-15 10:06:18.697718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.038 [2024-07-15 10:06:18.697745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.038 qpair failed and we were unable to recover it.
00:33:02.038 [2024-07-15 10:06:18.697871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.038 [2024-07-15 10:06:18.697904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.038 qpair failed and we were unable to recover it.
00:33:02.038 [2024-07-15 10:06:18.698084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.038 [2024-07-15 10:06:18.698111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.038 qpair failed and we were unable to recover it.
00:33:02.038 [2024-07-15 10:06:18.698235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.038 [2024-07-15 10:06:18.698262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.698407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.698440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.698584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.698611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.698729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.698755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.698927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.698954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.699102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.699128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.699266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.699293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.699456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.699483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.699634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.699662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.699810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.699838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.699967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.699995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.700142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.700169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.700319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.700346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.700459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.700488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.700608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.700635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.700764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.700793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.700936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.700964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.701078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.701105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.701278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.701320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.701478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.701508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.701659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.701687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.701859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.701896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.702081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.702108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.702238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.702265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.702387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.702414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.702566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.702593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.702763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.702790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.702949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.702977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.703096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.703122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.703262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.703289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.703437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.703466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.703638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.703669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.703811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.703839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.703976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.704005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.704136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.704165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.704311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.704339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.704487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.704515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.704667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.704695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.704840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.704868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.705033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.705059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.705221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.705249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.705396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.705424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.705551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.705579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.705755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.705783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.705928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.705956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.706101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.706129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.706281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.706309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.706485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.706512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.706659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.706686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.706833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.706860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.707036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.707064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.707206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.707233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.707378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.707406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.707526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.707554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.707702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.707730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.707903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.707933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.708055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.708082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.708219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.708248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.708449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.708492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.708665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.708694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.708871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.708905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.709042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.709069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.709201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.709228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.709356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.709383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.709527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.709556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.709670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.709698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.709830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.709857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.710002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.710028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.710173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.710201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.710352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.710379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.710534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.710562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.710738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.710766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.710936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.710976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.711129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.711157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.711310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.711337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.711457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.711485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.711636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.711663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.711789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.039 [2024-07-15 10:06:18.711816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.039 qpair failed and we were unable to recover it.
00:33:02.039 [2024-07-15 10:06:18.711959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.711986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.712162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.712189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.712309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.712336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.712461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.712488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.712636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.712663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.712797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.712823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.712976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.713002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.713161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.713201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.713331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.713360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.713525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.713553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.713683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.713711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.713864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.713899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.714035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.714061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.714209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.714236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.714410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.714437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.714563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.714591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.714710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.714738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.714966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.714994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.715120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.715148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.715322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.715350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.715499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.715527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.715682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.715709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.715843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.715894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.716016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.716045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.716200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.716227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.716412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.716439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.716566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.716593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.716717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.716744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.716862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.716895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.717026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.717053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.717209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.717236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.717388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.717414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.717600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.717627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.717755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.717782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.717931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.717972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.718154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.718182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.718333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.718361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.718510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.718537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.718667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.718694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.718840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.718868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.719005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.719033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.719206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.719233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.719383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.719411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.719541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.719569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.719721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.719749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.719883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.719911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.720050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.720078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.720230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.720263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.720388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.720415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.720533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.720562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.720708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.720735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.720925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.720966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.721134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.721175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.721330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.721359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.721511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.721538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.721690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.721717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.721888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.721916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.722068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.722096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.722226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.722253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.722374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.722402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.722558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.722585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.722713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.722741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 [2024-07-15 10:06:18.722741] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.722886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.722913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.723038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.723066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.723245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.723273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.723397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.723425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.723571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.723598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.723744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.723771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.723896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.723923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.724062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.724089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.724240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.724268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.724389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.724417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.724557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.724584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.724711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.724738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.724868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.724904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.725024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.725051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.040 qpair failed and we were unable to recover it.
00:33:02.040 [2024-07-15 10:06:18.725189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.040 [2024-07-15 10:06:18.725217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.725387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.725414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.725529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.725556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.725681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.725707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.725859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.725893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.726017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.726045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.726162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.726189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.726335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.726362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.726522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.726551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.726695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.726722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.726864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.726898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.727042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.727077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.727201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.727228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.727379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.727406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.727549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.727576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.727730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.727757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.727912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.727940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.728068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.728095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.728209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.728237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.728390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.728417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.728571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.728598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.728774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.728802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.728919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.728947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.729066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.729093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.729238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.729265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.729423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.729451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.729575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.729603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.729744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.729771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.729913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.729942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.730137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.730164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.730316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.730343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.730468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.730495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.730676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.730703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.730832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.730860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.731017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.731045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.731195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.731222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.731369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.731396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.731545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.731571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.731704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.731731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.731874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.731907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.732061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.732089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.732282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.732310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.732487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.732515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.732663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.732690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.732831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.732858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.733025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.733070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.733231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.733260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.733410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.041 [2024-07-15 10:06:18.733438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420
00:33:02.041 qpair failed and we were unable to recover it.
00:33:02.041 [2024-07-15 10:06:18.733638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.733666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.733783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.733811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.733967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.733996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.734125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.734159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.734289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.734316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.734487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.734515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.734667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.734696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.734848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.734880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.735033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.735060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.735211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.735238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 
00:33:02.041 [2024-07-15 10:06:18.735368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.735395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.735515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.735541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.735668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.735696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.735846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.735873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.736013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.736041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.736197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.736224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.736342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.736369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.736530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.736557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.736719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.736746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.736869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.736902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 
00:33:02.041 [2024-07-15 10:06:18.737025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.737055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.737198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.737226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.737376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.737404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.041 qpair failed and we were unable to recover it. 00:33:02.041 [2024-07-15 10:06:18.737555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.041 [2024-07-15 10:06:18.737582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.737693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.737720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.737911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.737940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.738087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.738115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.738239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.738267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.738417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.738445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.738594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.738622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 
00:33:02.042 [2024-07-15 10:06:18.738775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.738802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.738926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.738954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.739079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.739106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.739253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.739280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.739456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.739483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.739637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.739664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.739791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.739819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.739961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.739990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.740109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.740138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.740285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.740313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 
00:33:02.042 [2024-07-15 10:06:18.740443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.740472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.740619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.740647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.740818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.740846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.740978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.741011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.741135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.741164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.741312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.741340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.741460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.741487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.741662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.741690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.741837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.741865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.741997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.742025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 
00:33:02.042 [2024-07-15 10:06:18.742198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.742226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.742355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.742384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.742505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.742533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.742702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.742730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.742888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.742918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.743041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.743069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.743244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.743272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.743427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.743455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.743587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.743615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.743761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.743790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 
00:33:02.042 [2024-07-15 10:06:18.743950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.743980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.744106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.744134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.744261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.744290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.744420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.744449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.744575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.744602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.744756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.744785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.744939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.744992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.745144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.745173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.745297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.745325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.745476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.745504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 
00:33:02.042 [2024-07-15 10:06:18.745662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.745690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.745813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.745842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.745989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.746017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.746173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.746201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.746320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.746348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.746516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.746544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.746692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.746720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.746843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.746871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.747001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.747030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.747179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.747208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 
00:33:02.042 [2024-07-15 10:06:18.747360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.747388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.747530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.747559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.747704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.747732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.747866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.747907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.748040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.748069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.748211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.748239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.748389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.748417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.748593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.748621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.748742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.748770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.748919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.748948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 
00:33:02.042 [2024-07-15 10:06:18.749101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.749129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.749302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.749330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.749455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.749483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.749639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.749666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.749817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.749845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.749966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.749995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.750106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.750134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.750291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.750319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.750439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.750467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 00:33:02.042 [2024-07-15 10:06:18.750586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.750614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.042 qpair failed and we were unable to recover it. 
00:33:02.042 [2024-07-15 10:06:18.750761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.042 [2024-07-15 10:06:18.750789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.750942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.750972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.751149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.751177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.751324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.751352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.751477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.751506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.751677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.751705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.751825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.751853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.752002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.752031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.752181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.752209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.752363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.752391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 
00:33:02.043 [2024-07-15 10:06:18.752539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.752568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.752688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.752716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.752856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.752898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.753056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.753083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.753219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.753247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.753395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.753424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.753569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.753597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.753780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.753808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.753983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.754011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.754131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.754160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 
00:33:02.043 [2024-07-15 10:06:18.754301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.754329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.754457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.754486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.754638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.754666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.754815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.754847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.754973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.755001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.755153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.755181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.755331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.755360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.755491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.755519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.755669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.755697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 00:33:02.043 [2024-07-15 10:06:18.755868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.043 [2024-07-15 10:06:18.755901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.043 qpair failed and we were unable to recover it. 
00:33:02.043 [2024-07-15 10:06:18.756045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.043 [2024-07-15 10:06:18.756073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.043 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111 -> sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats ~210 times in total, with only the timestamps advancing from 10:06:18.756 to 10:06:18.792 ...]
00:33:02.323 [2024-07-15 10:06:18.792126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.323 [2024-07-15 10:06:18.792153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.323 qpair failed and we were unable to recover it.
00:33:02.323 [2024-07-15 10:06:18.792298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.323 [2024-07-15 10:06:18.792327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.323 qpair failed and we were unable to recover it. 00:33:02.323 [2024-07-15 10:06:18.792477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.323 [2024-07-15 10:06:18.792506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.323 qpair failed and we were unable to recover it. 00:33:02.323 [2024-07-15 10:06:18.792671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.323 [2024-07-15 10:06:18.792699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.323 qpair failed and we were unable to recover it. 00:33:02.323 [2024-07-15 10:06:18.792842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.323 [2024-07-15 10:06:18.792869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.323 qpair failed and we were unable to recover it. 00:33:02.323 [2024-07-15 10:06:18.793050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.323 [2024-07-15 10:06:18.793078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.323 qpair failed and we were unable to recover it. 00:33:02.323 [2024-07-15 10:06:18.793236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.323 [2024-07-15 10:06:18.793264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.323 qpair failed and we were unable to recover it. 00:33:02.323 [2024-07-15 10:06:18.793411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.793439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.793584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.793612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.793732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.793761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.793885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.793913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 
00:33:02.324 [2024-07-15 10:06:18.794038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.794066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.794237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.794265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.794411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.794439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.794604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.794632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.794788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.794816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.794981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.795010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.795139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.795167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.795290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.795318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.795469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.795497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.795664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.795692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 
00:33:02.324 [2024-07-15 10:06:18.795854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.795888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.796015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.796043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.796194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.796222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.796392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.796424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.796574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.796602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.796780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.796808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.796951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.796980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.797135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.797163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.797280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.797308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.797453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.797481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 
00:33:02.324 [2024-07-15 10:06:18.797602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.797631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.797785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.797812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.797954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.797983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.798122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.798150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.798327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.798356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.798475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.798504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.798628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.798656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.798807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.798834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.798993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.799022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.799143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.799172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 
00:33:02.324 [2024-07-15 10:06:18.799292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.799320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.799454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.799482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.799632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.799660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.799812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.799840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.800000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.800028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.800197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.800225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.800354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.800382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.800501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.800530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.800680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.800708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.800897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.800926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 
00:33:02.324 [2024-07-15 10:06:18.801093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.801122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.801269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.801297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.801444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.801472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.801624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.801653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.801773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.801801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.801976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.802005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.802127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.802155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.802304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.802332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.802486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.802514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.802659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.802687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 
00:33:02.324 [2024-07-15 10:06:18.802802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.802830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.802973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.803001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.803124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.803152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.803299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.803331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.803480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.803508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.803656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.324 [2024-07-15 10:06:18.803684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.324 qpair failed and we were unable to recover it. 00:33:02.324 [2024-07-15 10:06:18.803828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.803857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.804008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.804036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.804210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.804238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.804389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.804417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 
00:33:02.325 [2024-07-15 10:06:18.804530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.804558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.804742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.804770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.804923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.804952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.805098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.805125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.805267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.805295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.805422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.805450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.805601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.805629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.805786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.805813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.805936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.805963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.806143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.806170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 
00:33:02.325 [2024-07-15 10:06:18.806321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.806349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.806474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.806503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.806624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.806652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.806817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.806844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.807031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.807060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.807171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.807199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.807307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.807336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.807522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.807550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.807697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.807725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.807870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.807903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 
00:33:02.325 [2024-07-15 10:06:18.808061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.808090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.808266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.808294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.808405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.808434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.808579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.808607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.808727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.808755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.808873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.808906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.809029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.809057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.809194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.809222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.809344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.809373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.809501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.809534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 
00:33:02.325 [2024-07-15 10:06:18.809681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.809709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.809833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.809863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.810033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.810062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.810182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.810210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.810344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.810372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.810515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.810543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.810729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.810757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.810888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.810917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.811101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.811129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.811305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.811333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 
00:33:02.325 [2024-07-15 10:06:18.811452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.811480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.811592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.811619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.811733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.811762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.811917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.811945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.812092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.812120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.812243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.812272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.812422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.812451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.812627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.325 [2024-07-15 10:06:18.812655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.325 qpair failed and we were unable to recover it. 00:33:02.325 [2024-07-15 10:06:18.812799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.812827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.812979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.813007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 
00:33:02.326 [2024-07-15 10:06:18.813137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.813166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.813316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.813344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.813489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.813517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.813643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.813671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.813813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.813842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.813963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.813991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.814120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.814149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.814327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.814355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.814531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.814559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.814685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.814714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 
00:33:02.326 [2024-07-15 10:06:18.814857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.814894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.815048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.815075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.815203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.815231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.815358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.815386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.815555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.815582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.815727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.815755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.815900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.815928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.816080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.816108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.816258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.816285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.816412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.816439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 
00:33:02.326 [2024-07-15 10:06:18.816592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.816619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.816762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.816789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.816909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.816938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.817085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.817113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.817265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.817293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.817449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.817476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.817620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.817656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.817782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.817810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.817970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.818012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 00:33:02.326 [2024-07-15 10:06:18.818171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.326 [2024-07-15 10:06:18.818213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0260000b90 with addr=10.0.0.2, port=4420 00:33:02.326 qpair failed and we were unable to recover it. 
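On Linux, errno 111 is ECONNREFUSED: the connect() call in posix_sock_create reached 10.0.0.2 on port 4420 (the IANA-assigned NVMe/TCP port), but nothing was accepting connections there, so every attempt was refused and the qpair could not be established. The standalone C sketch below — illustrative only, not SPDK's actual posix_sock_create — reproduces the same errno against a reachable host with no listener; the address and port are taken from the log, and everything else is assumed for the example.

/* Minimal sketch (not SPDK code): shows why posix_sock_create logs
 * "connect() failed, errno = 111" when the peer is reachable but
 * nothing is listening on the target port. On Linux, 111 == ECONNREFUSED.
 * The address/port mirror the log and are assumptions for illustration. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* If the host is up but no listener is bound to the port, the
         * peer answers the SYN with RST and this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }

    close(fd);
    return 0;
}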
00:33:02.326 [... the same sequence continues: 7 attempts on tqpair=0x7f0260000b90 (10:06:18.818344-10:06:18.819287) and 1 on tqpair=0x7f0268000b90 (10:06:18.819402), interleaved with the target's trace-setup notices, grouped below ...]
00:33:02.326 [2024-07-15 10:06:18.819253] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:02.326 [2024-07-15 10:06:18.819287] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:02.326 [2024-07-15 10:06:18.819302] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:02.326 [2024-07-15 10:06:18.819315] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:02.326 [2024-07-15 10:06:18.819325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
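Taken together, the five app_setup_trace notices are the target's own instructions for pulling its tracepoints on this run: while the application is still up, 'spdk_trace -s nvmf -i 0' (or just 'spdk_trace', since it is the only SPDK application running) captures a snapshot of the events enabled by the 0xFFFF group mask, and after it exits the trace shared-memory file /dev/shm/nvmf_trace.0 can be copied off the node for offline analysis.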
00:33:02.326 [2024-07-15 10:06:18.819398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:33:02.326 [2024-07-15 10:06:18.819447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:33:02.326 [2024-07-15 10:06:18.819478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:33:02.326 [2024-07-15 10:06:18.819480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:33:02.326 [... the connect() failed / qpair failed sequence continues: 9 attempts on tqpair=0x7f0268000b90 (10:06:18.819578-10:06:18.820927), all errno = 111 ...]
00:33:02.329 [... the connect() failed / qpair failed sequence repeats for roughly 170 further attempts between 10:06:18.821071 and 10:06:18.849207, spread across tqpair=0xb2c450 (~140 attempts), tqpair=0x7f0268000b90 (~19), and tqpair=0x7f0258000b90 (~11); every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:33:02.329 [2024-07-15 10:06:18.849323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.329 [2024-07-15 10:06:18.849350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.329 qpair failed and we were unable to recover it. 00:33:02.329 [2024-07-15 10:06:18.849545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.329 [2024-07-15 10:06:18.849572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.329 qpair failed and we were unable to recover it. 00:33:02.329 [2024-07-15 10:06:18.849691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.329 [2024-07-15 10:06:18.849718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.329 qpair failed and we were unable to recover it. 00:33:02.329 [2024-07-15 10:06:18.849861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.329 [2024-07-15 10:06:18.849894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.329 qpair failed and we were unable to recover it. 00:33:02.329 [2024-07-15 10:06:18.850017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.329 [2024-07-15 10:06:18.850044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.329 qpair failed and we were unable to recover it. 00:33:02.329 [2024-07-15 10:06:18.850162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.329 [2024-07-15 10:06:18.850189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.329 qpair failed and we were unable to recover it. 00:33:02.329 [2024-07-15 10:06:18.850305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.329 [2024-07-15 10:06:18.850335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.329 qpair failed and we were unable to recover it. 00:33:02.329 [2024-07-15 10:06:18.850457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.329 [2024-07-15 10:06:18.850484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.329 qpair failed and we were unable to recover it. 00:33:02.329 [2024-07-15 10:06:18.850679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.329 [2024-07-15 10:06:18.850706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.329 qpair failed and we were unable to recover it. 00:33:02.329 [2024-07-15 10:06:18.850831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.329 [2024-07-15 10:06:18.850858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.329 qpair failed and we were unable to recover it. 
00:33:02.329 [2024-07-15 10:06:18.851006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.329 [2024-07-15 10:06:18.851035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.329 qpair failed and we were unable to recover it. 00:33:02.329 [2024-07-15 10:06:18.851170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.329 [2024-07-15 10:06:18.851199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.329 qpair failed and we were unable to recover it. 00:33:02.329 [2024-07-15 10:06:18.851347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.329 [2024-07-15 10:06:18.851375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.329 qpair failed and we were unable to recover it. 00:33:02.329 [2024-07-15 10:06:18.851534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.329 [2024-07-15 10:06:18.851563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.329 qpair failed and we were unable to recover it. 00:33:02.329 [2024-07-15 10:06:18.851699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.329 [2024-07-15 10:06:18.851728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.329 qpair failed and we were unable to recover it. 00:33:02.329 [2024-07-15 10:06:18.851886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.329 [2024-07-15 10:06:18.851915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.852070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.852097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.852243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.852272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.852426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.852455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.852587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.852615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 
00:33:02.330 [2024-07-15 10:06:18.852743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.852771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.852916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.852944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.853074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.853102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.853227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.853255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.853400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.853428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.853552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.853579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.853756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.853785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.853940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.853968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.854092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.854119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.854240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.854268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 
00:33:02.330 [2024-07-15 10:06:18.854424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.854452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.854577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.854606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.854754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.854781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.854932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.854975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.855100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.855129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.855251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.855279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.855394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.855423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.855551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.855579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.855703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.855731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.855857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.855893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 
00:33:02.330 [2024-07-15 10:06:18.856028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.856056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.856257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.856285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.856438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.856464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.856604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.856632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.856753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.856780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.856910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.856939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.857094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.857127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.857286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.857315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.857453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.857481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.857653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.857682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 
00:33:02.330 [2024-07-15 10:06:18.857873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.857907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.858056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.858084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.858235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.858263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.858411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.858439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.858552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.858580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.858696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.858724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.858869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.858903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.859056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.859083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.859205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.859233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.859384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.859412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 
00:33:02.330 [2024-07-15 10:06:18.859581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.859610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.859763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.859791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.859933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.859962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.860106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.860133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.860285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.860313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.860455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.860483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.860615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.860643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.860797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.860826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.860962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.330 [2024-07-15 10:06:18.860991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.330 qpair failed and we were unable to recover it. 00:33:02.330 [2024-07-15 10:06:18.861111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.861139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 
00:33:02.331 [2024-07-15 10:06:18.861304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.861332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.861450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.861478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.861603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.861631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.861860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.861896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.862032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.862060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.862209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.862238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.862365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.862392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.862547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.862575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.862720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.862747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.862871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.862904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 
00:33:02.331 [2024-07-15 10:06:18.863049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.863076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.863204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.863231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.863377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.863405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.863515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.863543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.863702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.863729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.863883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.863912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.864036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.864068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.864215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.864244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.864387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.864415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.864621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.864649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 
00:33:02.331 [2024-07-15 10:06:18.864806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.864835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.865028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.865057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.865178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.865206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.865350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.865377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.865509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.865538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.865690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.865718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.865898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.865926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.866093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.866121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.866312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.866340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.866459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.866488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 
00:33:02.331 [2024-07-15 10:06:18.866645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.866673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.866787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.866815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.866963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.866992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.867121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.867149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.867295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.867323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.867472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.867500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.867645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.867673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.867814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.867841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.867962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.867990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.868188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.868216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 
00:33:02.331 [2024-07-15 10:06:18.868363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.868391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.868534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.868562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.868737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.868765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.868898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.868929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.869051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.869078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.869227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.869254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.869406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.869433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.869581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.869609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.869753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.869781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.869903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.869932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 
00:33:02.331 [2024-07-15 10:06:18.870051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.870079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.870193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.870220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.870376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.870403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.870534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.870561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.870683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.870712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.870858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.870890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.871016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.871050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.871177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.331 [2024-07-15 10:06:18.871204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.331 qpair failed and we were unable to recover it. 00:33:02.331 [2024-07-15 10:06:18.871379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.332 [2024-07-15 10:06:18.871407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.332 qpair failed and we were unable to recover it. 00:33:02.332 [2024-07-15 10:06:18.871523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.332 [2024-07-15 10:06:18.871550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420 00:33:02.332 qpair failed and we were unable to recover it. 
00:33:02.332 [2024-07-15 10:06:18.871667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.332 [2024-07-15 10:06:18.871695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0268000b90 with addr=10.0.0.2, port=4420
00:33:02.332 qpair failed and we were unable to recover it.
[... sequence above repeated 2 more times for tqpair=0x7f0268000b90 (10:06:18.871827 - 10:06:18.872016) ...]
00:33:02.332 [2024-07-15 10:06:18.872144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.332 [2024-07-15 10:06:18.872174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420
00:33:02.332 qpair failed and we were unable to recover it.
[... sequence above repeated 154 more times for tqpair=0x7f0258000b90 (10:06:18.872299 - 10:06:18.897853) ...]
00:33:02.334 [2024-07-15 10:06:18.898036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.334 [2024-07-15 10:06:18.898082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.334 qpair failed and we were unable to recover it.
[... sequence above repeated 51 more times for tqpair=0xb2c450 (10:06:18.898294 - 10:06:18.906403) ...]
00:33:02.335 [2024-07-15 10:06:18.906573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.906600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.906721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.906749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.906870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.906904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.907028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.907056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.907212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.907239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.907383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.907410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.907533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.907560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.907684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.907710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.907831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.907859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.907984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.908012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 
00:33:02.335 [2024-07-15 10:06:18.908159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.908186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.908311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.908338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.908459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.908486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.908659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.908687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.908805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.908832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.908994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.909023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.909149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.909177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.909328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.909356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.909499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.909526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.909647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.909675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 
00:33:02.335 [2024-07-15 10:06:18.909825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.909852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.909972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.910000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.910164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.910192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.910335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.335 [2024-07-15 10:06:18.910362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.335 qpair failed and we were unable to recover it. 00:33:02.335 [2024-07-15 10:06:18.910494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.910522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.910666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.910693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.910814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.910842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.910970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.910998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.911115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.911147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.911267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.911294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 
00:33:02.336 [2024-07-15 10:06:18.911407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.911434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.911549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.911576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.911701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.911728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.911836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.911863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.912014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.912041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.912168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.912196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.912314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.912342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.912487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.912514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.912627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.912654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.912797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.912824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 
00:33:02.336 [2024-07-15 10:06:18.912963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.912991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.913145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.913172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.913286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.913313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.913446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.913473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.913623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.913655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.913774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.913802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.913926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.913954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.914104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.914131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.914249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.914276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.914420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.914447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 
00:33:02.336 [2024-07-15 10:06:18.914561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.914587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.914738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.914765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.914913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.914941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.915082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.915109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.915226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.915253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.915399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.915430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.915576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.915603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.915721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.915748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.915896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.915924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.916047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.916074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 
00:33:02.336 [2024-07-15 10:06:18.916195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.916221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.916338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.916365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.916489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.916516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.916672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.916699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.916844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.916871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.917014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.917042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.917159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.917186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.917302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.917329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.917499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.917525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.917644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.917672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 
00:33:02.336 [2024-07-15 10:06:18.917793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.917822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.917950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.917978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.918095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.918123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.918240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.918267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.918445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.918475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.918588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.918616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.336 [2024-07-15 10:06:18.918776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.336 [2024-07-15 10:06:18.918802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.336 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.918949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.918975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.919118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.919144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.919286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.919312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 
00:33:02.337 [2024-07-15 10:06:18.919420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.919446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.919567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.919595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.919720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.919747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.919874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.919905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.920033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.920059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.920170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.920196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.920314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.920340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.920490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.920516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.920633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.920660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.920771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.920798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 
00:33:02.337 [2024-07-15 10:06:18.920974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.921001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.921134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.921160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.921300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.921327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.921450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.921477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.921651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.921679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.921808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.921835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.921974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.922004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.922129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.922155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.922306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.922333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.922474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.922501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 
00:33:02.337 [2024-07-15 10:06:18.922639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.922666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.922816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.922843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.923017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.923044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.923171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.923199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.923343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.923369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.923490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.923517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.923670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.923696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.923803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.923830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.923955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.923983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.924103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.924137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 
00:33:02.337 [2024-07-15 10:06:18.924298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.924324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.924438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.924466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.924611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.924638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.924790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.924817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.924967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.924994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.925134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.925160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.925296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.925323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.925441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.925468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.925583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.925609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.925726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.925753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 
00:33:02.337 [2024-07-15 10:06:18.925866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.925898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.926058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.926085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.926210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.926237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.926356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.926388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.926561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.926587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.926736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.926763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.926886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.926924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.927043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.927069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.927220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.927246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.927399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.927425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 
00:33:02.337 [2024-07-15 10:06:18.927534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.927560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.927697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.927723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.927864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.927895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.928047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.928073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.928237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.928263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.928418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.928445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.928563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.337 [2024-07-15 10:06:18.928589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.337 qpair failed and we were unable to recover it. 00:33:02.337 [2024-07-15 10:06:18.928735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.338 [2024-07-15 10:06:18.928762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.338 qpair failed and we were unable to recover it. 00:33:02.338 [2024-07-15 10:06:18.928885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.338 [2024-07-15 10:06:18.928912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.338 qpair failed and we were unable to recover it. 00:33:02.338 [2024-07-15 10:06:18.929026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.338 [2024-07-15 10:06:18.929053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.338 qpair failed and we were unable to recover it. 
00:33:02.341 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:02.341 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:33:02.341 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:33:02.341 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:33:02.341 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:02.343 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:02.343 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:02.343 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:02.343 10:06:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:02.344 [2024-07-15 10:06:18.988171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.988197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.988308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.988334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.988451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.988477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.988622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.988648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.988773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.988799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.988944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.988971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.989093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.989119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.989242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.989270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.989425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.989454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.989596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.989622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 
00:33:02.344 [2024-07-15 10:06:18.989768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.989794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.989938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.989966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.990114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.990140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.990282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.990308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.990450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.990477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.990592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.990620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.990735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.990761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.990902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.990929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.991051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.991077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.991221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.991247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 
00:33:02.344 [2024-07-15 10:06:18.991356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.991383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.991564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.991590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.991742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.991768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.991886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.991913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.992034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.992060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.992179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.992205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.992327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.992353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.992498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.992524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.992644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.992670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.992795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.992821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 
00:33:02.344 [2024-07-15 10:06:18.992995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.993022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.993166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.993192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.993340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.993366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.993516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.993543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.993671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.993697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.993811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.993845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.994016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.994044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.994167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.994194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.994311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.994338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.344 qpair failed and we were unable to recover it. 00:33:02.344 [2024-07-15 10:06:18.994455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.344 [2024-07-15 10:06:18.994482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 
00:33:02.345 [2024-07-15 10:06:18.994608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.994634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.994773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.994799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.994927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.994954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.995115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.995142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.995280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.995306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.995423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.995449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.995605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.995631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.995774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.995801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.995924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.995951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.996072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.996099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 
00:33:02.345 [2024-07-15 10:06:18.996212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.996239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.996363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.996390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.996514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.996540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.996696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.996723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.996868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.996901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.997043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.997069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.997212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.997239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.997361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.997387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.997537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.997563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.997681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.997707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 
00:33:02.345 [2024-07-15 10:06:18.997861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.997893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.998010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.998037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.998156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.998184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.998304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.998330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.998462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.998488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.998622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.998648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.998790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.998816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.998959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.998986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.999108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.999134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.999247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.999274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 
00:33:02.345 [2024-07-15 10:06:18.999421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.999447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.999610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.999636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.999750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.999776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:18.999928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:18.999954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:19.000098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:19.000125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:19.000244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:19.000271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:19.000419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:19.000449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:19.000588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:19.000615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:19.000733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:19.000759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:19.000905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:19.000932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 
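For context: errno 111 is ECONNREFUSED on Linux, i.e. nothing is accepting TCP connections at 10.0.0.2:4420 while the initiator retries; the refusals should stop only once the target side brings up a listener on that port. A quick, hedged way to confirm the errno mapping and reproduce the refusal outside the test (assumes a Linux host with no listener on that address):

# errno 111 == ECONNREFUSED on Linux
python3 -c 'import errno; print(errno.ECONNREFUSED)'    # -> 111
# any plain TCP client sees the same refusal while no listener exists
nc -vz 10.0.0.2 4420    # -> "Connection refused"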
00:33:02.345 [2024-07-15 10:06:19.001087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.345 [2024-07-15 10:06:19.001114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420
00:33:02.345 qpair failed and we were unable to recover it.
00:33:02.345 Malloc0
00:33:02.345 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:02.345 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:33:02.345 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:02.345 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect-failure records for attempts 10:06:19.001244 through 10:06:19.002191 interleave with the xtrace output above; all fail identically against tqpair=0xb2c450, addr=10.0.0.2, port=4420 ...]
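The xtrace records above show host/target_disconnect.sh configuring the target side over JSON-RPC while the initiator keeps retrying; rpc_cmd is the autotest suite's wrapper around SPDK's scripts/rpc.py. A hedged equivalent of the transport call issued directly (the harness's extra flags, such as the -o seen here, come from its own transport options and are an assumption in this sketch):

# assumed direct equivalent of "rpc_cmd nvmf_create_transport -t tcp -o"
./scripts/rpc.py nvmf_create_transport -t TCP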
00:33:02.345 [2024-07-15 10:06:19.002338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:19.002368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:19.002483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:19.002509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:19.002622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:19.002649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:19.002772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:19.002799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.345 [2024-07-15 10:06:19.002956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.345 [2024-07-15 10:06:19.002983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.345 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.003128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.003154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.003292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.003319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.003437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.003463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.003590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.003617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.003746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.003772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 
00:33:02.346 [2024-07-15 10:06:19.003900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.003926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.004047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.004073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.004192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.004218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.004329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.004356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.004498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.004525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.004648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.004674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.004798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.004824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.004951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.004955] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:02.346 [2024-07-15 10:06:19.004978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.005096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.005122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.005247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.005273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 
00:33:02.346 [2024-07-15 10:06:19.005389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.005416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.005559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.005585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.005712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.005739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.005863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.005894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.006018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.006044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.006153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.006180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.006317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.006343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.006481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.006508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.006675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.006701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.006848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.006874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 
00:33:02.346 [2024-07-15 10:06:19.007001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.007027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.007145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.007171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.007287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.007313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.007453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.007479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.007627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.007655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.007797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.007823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.007953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.007981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.008124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.008150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.008292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.008319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 00:33:02.346 [2024-07-15 10:06:19.008438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.008464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 
00:33:02.346 [2024-07-15 10:06:19.008589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.346 [2024-07-15 10:06:19.008615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.346 qpair failed and we were unable to recover it. 
00:33:02.346 [... 27 further identical connect() failed / sock connection error / "qpair failed" triplets for tqpair=0xb2c450 (10:06:19.008736 through 10:06:19.013008) condensed ...] 
00:33:02.347 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.347 [2024-07-15 10:06:19.013128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.347 [2024-07-15 10:06:19.013155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.347 qpair failed and we were unable to recover it. 
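errno 111 is ECONNREFUSED: every redial above is rejected outright because nothing is accepting on 10.0.0.2:4420 yet. A minimal, hypothetical probe loop (not part of the harness) that exercises the same connect(2) path from bash:

# Hypothetical triage loop; bash's /dev/tcp redirection issues a plain
# connect(2). With no listener on the port, each attempt fails with
# ECONNREFUSED (errno 111), matching the driver errors in the log.
for i in 1 2 3; do
  if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "attempt $i: connect() refused (ECONNREFUSED, errno 111)"
  fi
  sleep 0.1
done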
00:33:02.347 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:02.347 [2024-07-15 10:06:19.013279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.347 [2024-07-15 10:06:19.013305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.347 qpair failed and we were unable to recover it. 00:33:02.347 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.347 [2024-07-15 10:06:19.013449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.347 [2024-07-15 10:06:19.013475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.347 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:02.347 qpair failed and we were unable to recover it. 00:33:02.347 [2024-07-15 10:06:19.013603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.347 [2024-07-15 10:06:19.013630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.347 qpair failed and we were unable to recover it. 00:33:02.347 [2024-07-15 10:06:19.013748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.347 [2024-07-15 10:06:19.013774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.347 qpair failed and we were unable to recover it. 00:33:02.347 [2024-07-15 10:06:19.013900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.347 [2024-07-15 10:06:19.013928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.347 qpair failed and we were unable to recover it. 00:33:02.347 [2024-07-15 10:06:19.014082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.347 [2024-07-15 10:06:19.014108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.347 qpair failed and we were unable to recover it. 00:33:02.347 [2024-07-15 10:06:19.014238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.347 [2024-07-15 10:06:19.014264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.347 qpair failed and we were unable to recover it. 00:33:02.347 [2024-07-15 10:06:19.014411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.347 [2024-07-15 10:06:19.014437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.347 qpair failed and we were unable to recover it. 00:33:02.347 [2024-07-15 10:06:19.014554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.347 [2024-07-15 10:06:19.014582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.347 qpair failed and we were unable to recover it. 
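The interleaved @22/@587 lines are the harness at work: rpc_cmd issues a target-side JSON-RPC and the [[ 0 == 0 ]] check verifies its exit status, all while the host keeps redialing. The equivalent direct call (the rpc.py path and default RPC socket are assumptions here; the arguments are verbatim from the trace):

# Create the target subsystem under test; -s sets the serial number and
# -a allows any host NQN to connect to it.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001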
00:33:02.347 [2024-07-15 10:06:19.014722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.347 [2024-07-15 10:06:19.014749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.347 qpair failed and we were unable to recover it. 
00:33:02.347 [... 38 further identical triplets for tqpair=0xb2c450 (10:06:19.014881 through 10:06:19.020706) condensed ...] 
00:33:02.347 [2024-07-15 10:06:19.020851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.347 [2024-07-15 10:06:19.020882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.347 qpair failed and we were unable to recover it. 
00:33:02.347 [2024-07-15 10:06:19.020999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.347 [2024-07-15 10:06:19.021025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.347 qpair failed and we were unable to recover it. 00:33:02.347 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.347 [2024-07-15 10:06:19.021146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.347 [2024-07-15 10:06:19.021173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.347 qpair failed and we were unable to recover it. 00:33:02.347 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:02.347 [2024-07-15 10:06:19.021307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.347 [2024-07-15 10:06:19.021334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.347 qpair failed and we were unable to recover it. 00:33:02.347 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.347 [2024-07-15 10:06:19.021456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.347 [2024-07-15 10:06:19.021483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.347 qpair failed and we were unable to recover it. 00:33:02.347 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:02.347 [2024-07-15 10:06:19.021596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.347 [2024-07-15 10:06:19.021623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.347 qpair failed and we were unable to recover it. 00:33:02.347 [2024-07-15 10:06:19.021762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.347 [2024-07-15 10:06:19.021788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.347 qpair failed and we were unable to recover it. 00:33:02.347 [2024-07-15 10:06:19.021937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.348 [2024-07-15 10:06:19.021964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.348 qpair failed and we were unable to recover it. 00:33:02.348 [2024-07-15 10:06:19.022091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.348 [2024-07-15 10:06:19.022117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.348 qpair failed and we were unable to recover it. 
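The @24 trace above then attaches the Malloc0 bdev to that subsystem as a namespace. Mirrored outside the harness (same wrapper and socket assumptions as above):

# Expose bdev Malloc0 through nqn.2016-06.io.spdk:cnode1; the namespace
# ID is auto-assigned when not passed explicitly.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0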
00:33:02.348 [2024-07-15 10:06:19.022238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.348 [2024-07-15 10:06:19.022264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.348 qpair failed and we were unable to recover it. 
00:33:02.348 [... 18 further identical triplets for tqpair=0xb2c450 (10:06:19.022383 through 10:06:19.025070) condensed ...] 
00:33:02.348 [2024-07-15 10:06:19.025177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.348 [2024-07-15 10:06:19.025203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.348 qpair failed and we were unable to recover it. 00:33:02.348 A controller has encountered a failure and is being reset. 
00:33:02.348 [2024-07-15 10:06:19.025395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.348 [2024-07-15 10:06:19.025437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0258000b90 with addr=10.0.0.2, port=4420 00:33:02.348 qpair failed and we were unable to recover it. 
00:33:02.348 [... 6 further identical triplets on this second qpair, 0x7f0258000b90 (10:06:19.025565 through 10:06:19.026430), condensed ...] 
00:33:02.348 [2024-07-15 10:06:19.026549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.348 [2024-07-15 10:06:19.026577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.348 qpair failed and we were unable to recover it. 
00:33:02.348 [... 2 further identical triplets back on tqpair=0xb2c450 (10:06:19.026723 through 10:06:19.026939) condensed ...] 
00:33:02.348 [2024-07-15 10:06:19.027059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.348 [2024-07-15 10:06:19.027085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c450 with addr=10.0.0.2, port=4420 00:33:02.348 qpair failed and we were unable to recover it. 
00:33:02.348 [... 5 further identical triplets for tqpair=0xb2c450 (10:06:19.027222 through 10:06:19.027859) condensed ...] 
00:33:02.348 [2024-07-15 10:06:19.028034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.348 [2024-07-15 10:06:19.028082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3a480 with addr=10.0.0.2, port=4420 [2024-07-15 10:06:19.028103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3a480 is same with the state(5) to be set 00:33:02.348 [2024-07-15 10:06:19.028128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3a480 (9): Bad file descriptor 00:33:02.348 [2024-07-15 10:06:19.028155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.348 [2024-07-15 10:06:19.028171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.348 [2024-07-15 10:06:19.028187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.348 Unable to reset the controller. 
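This is where the host gives up: the reconnect poll finds the socket already dead (Bad file descriptor), nvme_ctrlr_fail marks cnode1 failed, and the reset attempt is abandoned, which is exactly what the tc2 case expects while no listener exists. A hypothetical triage pair (not run by the harness) to confirm that from both sides before the harness adds the missing listener below:

# Kernel view: is anything actually listening on the port being dialed?
ss -ltn 'sport = :4420'
# Target view: which listeners does the subsystem really have?
# (nvmf_subsystem_get_listeners is assumed available in this SPDK revision.)
scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1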
00:33:02.348 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.348 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:02.348 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.348 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:02.348 [2024-07-15 10:06:19.033128] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:02.348 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.348 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:02.348 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.348 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:02.348 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.348 10:06:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2056787 00:33:03.280 Controller properly reset. 00:33:08.549 Initializing NVMe Controllers 00:33:08.549 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:08.549 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:08.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:08.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:08.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:08.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:08.549 Initialization complete. Launching workers. 
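Recovery follows immediately once the @25/@26 RPCs above publish the listener the host has been dialing; the "NVMe/TCP Target Listening" notice, "Controller properly reset.", and the fabrics attach confirm it. The two calls mirrored outside the harness (wrapper and default RPC socket are assumptions; the arguments are verbatim from the trace):

# Publish the subsystem, and the discovery service, on 10.0.0.2:4420;
# this is what lets the pending reconnect finally succeed.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420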
00:33:08.549 Starting thread on core 1 00:33:08.549 Starting thread on core 2 00:33:08.549 Starting thread on core 3 00:33:08.549 Starting thread on core 0 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:08.549 00:33:08.549 real 0m10.691s 00:33:08.549 user 0m32.540s 00:33:08.549 sys 0m7.628s 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:08.549 ************************************ 00:33:08.549 END TEST nvmf_target_disconnect_tc2 00:33:08.549 ************************************ 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:08.549 rmmod nvme_tcp 00:33:08.549 rmmod nvme_fabrics 00:33:08.549 rmmod nvme_keyring 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2057312 ']' 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2057312 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2057312 ']' 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2057312 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2057312 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2057312' 00:33:08.549 killing process with pid 2057312 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2057312 00:33:08.549 10:06:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2057312 00:33:08.549 
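With tc2 passed, nvmftestfini unwinds the host side: sync, unload the NVMe fabrics modules (the rmmod lines above show nvme_tcp, nvme_fabrics, and nvme_keyring going away), then kill the target. A condensed sketch of that sequence (the real helper retries module removal in a 1..20 loop and resolves the process name before killing):

# Teardown as traced above:
sync
modprobe -v -r nvme-tcp        # pulls out nvme_tcp and its dependents
modprobe -v -r nvme-fabrics
kill 2057312                   # the nvmf target pid recorded at test start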
10:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:08.549 10:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:08.549 10:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:08.549 10:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:08.549 10:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:08.549 10:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.549 10:06:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:08.549 10:06:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.076 10:06:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:11.076 00:33:11.076 real 0m15.338s 00:33:11.076 user 0m57.532s 00:33:11.076 sys 0m10.184s 00:33:11.076 10:06:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:11.076 10:06:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:11.076 ************************************ 00:33:11.076 END TEST nvmf_target_disconnect 00:33:11.076 ************************************ 00:33:11.076 10:06:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:11.076 10:06:27 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:33:11.076 10:06:27 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:11.076 10:06:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:11.076 10:06:27 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:33:11.076 00:33:11.076 real 27m4.313s 00:33:11.076 user 73m53.205s 00:33:11.076 sys 6m27.271s 00:33:11.076 10:06:27 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:11.076 10:06:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:11.076 ************************************ 00:33:11.076 END TEST nvmf_tcp 00:33:11.076 ************************************ 00:33:11.076 10:06:27 -- common/autotest_common.sh@1142 -- # return 0 00:33:11.076 10:06:27 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:33:11.076 10:06:27 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:11.076 10:06:27 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:11.076 10:06:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:11.076 10:06:27 -- common/autotest_common.sh@10 -- # set +x 00:33:11.076 ************************************ 00:33:11.076 START TEST spdkcli_nvmf_tcp 00:33:11.076 ************************************ 00:33:11.076 10:06:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:11.076 * Looking for test storage... 
00:33:11.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2058389 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2058389 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2058389 ']' 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:11.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:11.077 [2024-07-15 10:06:27.464400] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:33:11.077 [2024-07-15 10:06:27.464499] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2058389 ] 00:33:11.077 EAL: No free 2048 kB hugepages reported on node 1 00:33:11.077 [2024-07-15 10:06:27.497035] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:11.077 [2024-07-15 10:06:27.524073] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:11.077 [2024-07-15 10:06:27.610897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:11.077 [2024-07-15 10:06:27.610901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:11.077 10:06:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:11.077 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:11.077 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:11.077 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:11.077 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:11.077 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:11.077 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:11.077 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:11.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:11.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:11.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:11.077 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 
allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:11.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:11.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:11.077 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:11.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:11.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:11.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:11.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:11.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:11.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:11.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:11.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:11.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:11.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:11.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:11.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:11.077 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:11.077 ' 00:33:13.633 [2024-07-15 10:06:30.295881] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:15.007 [2024-07-15 10:06:31.536314] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:17.534 [2024-07-15 10:06:33.823298] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:19.433 [2024-07-15 10:06:35.797575] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:20.829 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:20.829 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:20.829 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:20.829 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:20.829 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:20.829 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:20.829 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:20.829 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:20.829 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:20.829 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:20.829 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:20.829 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:20.829 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:20.829 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:20.829 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:20.829 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:20.829 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:20.829 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:20.829 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:20.829 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:20.829 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:20.829 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:20.829 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:20.829 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:20.829 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:20.829 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:20.829 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:20.829 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:20.829 10:06:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:20.829 10:06:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:20.829 10:06:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:20.829 10:06:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:20.829 10:06:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:20.829 10:06:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:20.829 10:06:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:20.829 10:06:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:33:21.088 10:06:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:21.088 10:06:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:21.088 10:06:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:21.088 10:06:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:21.088 10:06:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:21.346 10:06:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:21.346 10:06:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:21.346 10:06:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:21.346 10:06:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:21.346 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:21.346 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:21.346 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:21.346 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:21.346 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:21.346 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:21.346 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:21.346 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:21.346 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:21.346 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:21.346 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:21.346 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:21.346 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:21.346 ' 00:33:26.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:26.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:26.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:26.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:26.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:26.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:26.616 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:26.616 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:26.616 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:26.616 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:26.616 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:26.616 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:26.616 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:26.616 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:26.616 10:06:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:26.616 10:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:26.616 10:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:26.616 10:06:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2058389 00:33:26.616 10:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2058389 ']' 00:33:26.616 10:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2058389 00:33:26.616 10:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:33:26.616 10:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:26.616 10:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2058389 00:33:26.616 10:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:26.616 10:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:26.616 10:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2058389' 00:33:26.616 killing process with pid 2058389 00:33:26.616 10:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2058389 00:33:26.616 10:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2058389 00:33:26.874 10:06:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:26.874 10:06:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:26.874 10:06:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2058389 ']' 00:33:26.874 10:06:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2058389 00:33:26.874 10:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2058389 ']' 00:33:26.874 10:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2058389 00:33:26.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2058389) - No such process 00:33:26.874 10:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2058389 is not found' 00:33:26.874 Process with pid 2058389 is not found 00:33:26.874 10:06:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:26.874 10:06:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:26.874 10:06:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:26.874 00:33:26.874 real 0m16.063s 00:33:26.874 user 0m34.060s 00:33:26.874 sys 0m0.827s 00:33:26.874 10:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:26.874 10:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:26.874 ************************************ 00:33:26.874 END TEST spdkcli_nvmf_tcp 00:33:26.874 ************************************ 00:33:26.874 10:06:43 -- common/autotest_common.sh@1142 -- # return 0 00:33:26.874 
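The spdkcli_job.py wrapper traced above simply feeds each quoted command to scripts/spdkcli.py and checks the tool's output against the given match string. A minimal manual sketch of the same create/teardown flow, assuming a running nvmf target on the default /var/tmp/spdk.sock and with $SPDK_DIR standing in for the workspace checkout used here:

  # create a 32 MB malloc bdev with 512-byte blocks, the TCP transport, and one subsystem
  $SPDK_DIR/scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
  $SPDK_DIR/scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  $SPDK_DIR/scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  $SPDK_DIR/scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
  $SPDK_DIR/scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
  # teardown mirrors the delete commands traced above
  $SPDK_DIR/scripts/spdkcli.py /nvmf/subsystem delete_all
  $SPDK_DIR/scripts/spdkcli.py /bdevs/malloc delete Malloc1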
10:06:43 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:26.874 10:06:43 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:26.874 10:06:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:26.874 10:06:43 -- common/autotest_common.sh@10 -- # set +x 00:33:26.874 ************************************ 00:33:26.874 START TEST nvmf_identify_passthru 00:33:26.874 ************************************ 00:33:26.874 10:06:43 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:26.874 * Looking for test storage... 00:33:26.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:26.874 10:06:43 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:26.874 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:26.874 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:26.874 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:26.874 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:26.874 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:26.874 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:26.874 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:26.874 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:26.874 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:26.874 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:26.874 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:26.874 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:26.874 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:26.874 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:26.874 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:26.874 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:26.874 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:26.874 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:26.874 10:06:43 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:26.874 10:06:43 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:26.874 10:06:43 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:26.874 10:06:43 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.874 10:06:43 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.874 10:06:43 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.874 10:06:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:26.874 10:06:43 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.875 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:33:26.875 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:26.875 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:26.875 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:26.875 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:26.875 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:26.875 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:26.875 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:26.875 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:26.875 10:06:43 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:26.875 10:06:43 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:26.875 10:06:43 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:26.875 10:06:43 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:26.875 10:06:43 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.875 10:06:43 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.875 10:06:43 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.875 10:06:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:26.875 10:06:43 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.875 10:06:43 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:26.875 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:26.875 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:26.875 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:26.875 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:26.875 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:26.875 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.875 10:06:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:26.875 10:06:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.875 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:26.875 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:26.875 10:06:43 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:33:26.875 10:06:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:28.778 10:06:45 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:28.778 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:28.778 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:28.778 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:28.778 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
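The device discovery traced here keys off PCI vendor/device IDs (0x8086:0x159b is an Intel E810-family 'ice' function) and then reads the bound kernel netdev straight out of sysfs. The same lookup by hand for the first port found in this run:

  ls /sys/bus/pci/devices/0000:0a:00.0/net    # prints the netdev name; cvl_0_0 on this host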
00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:28.778 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:28.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:28.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:33:28.779 00:33:28.779 --- 10.0.0.2 ping statistics --- 00:33:28.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.779 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:28.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:28.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:33:28.779 00:33:28.779 --- 10.0.0.1 ping statistics --- 00:33:28.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.779 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:28.779 10:06:45 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:28.779 10:06:45 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:28.779 10:06:45 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:28.779 10:06:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:28.779 10:06:45 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:28.779 10:06:45 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:33:28.779 10:06:45 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:33:28.779 10:06:45 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:33:28.779 10:06:45 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:33:28.779 10:06:45 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:33:28.779 10:06:45 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:33:28.779 10:06:45 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:28.779 10:06:45 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:28.779 10:06:45 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:33:29.036 10:06:45 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:33:29.036 10:06:45 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:33:29.036 10:06:45 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:33:29.036 10:06:45 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:33:29.036 10:06:45 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:33:29.036 10:06:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:33:29.036 10:06:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:29.036 10:06:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:29.036 EAL: No free 2048 kB hugepages reported on node 1 00:33:33.232 
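The nvmf_tcp_init bring-up traced above reduces to a short network-namespace recipe; a condensed sketch using the cvl_0_0/cvl_0_1 names discovered on this host (run as root; interface names will differ elsewhere):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # initiator -> target, as verified above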
10:06:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:33:33.232 10:06:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:33:33.232 10:06:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:33.232 10:06:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:33.232 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.424 10:06:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:37.424 10:06:53 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:37.424 10:06:53 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:37.424 10:06:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:37.424 10:06:54 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:37.424 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:37.424 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:37.424 10:06:54 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2062994 00:33:37.424 10:06:54 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:37.424 10:06:54 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:37.424 10:06:54 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2062994 00:33:37.424 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2062994 ']' 00:33:37.424 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:37.424 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:37.424 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:37.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:37.424 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:37.424 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:37.424 [2024-07-15 10:06:54.066839] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:33:37.424 [2024-07-15 10:06:54.066965] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:37.424 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.424 [2024-07-15 10:06:54.105708] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:37.424 [2024-07-15 10:06:54.132159] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:37.683 [2024-07-15 10:06:54.218934] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:37.683 [2024-07-15 10:06:54.218995] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:37.683 [2024-07-15 10:06:54.219022] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:37.683 [2024-07-15 10:06:54.219033] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:37.683 [2024-07-15 10:06:54.219043] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:37.683 [2024-07-15 10:06:54.219094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:37.683 [2024-07-15 10:06:54.219156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:37.683 [2024-07-15 10:06:54.219207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:37.683 [2024-07-15 10:06:54.219209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:37.683 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:37.683 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:33:37.683 10:06:54 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:37.683 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.683 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:37.683 INFO: Log level set to 20 00:33:37.683 INFO: Requests: 00:33:37.683 { 00:33:37.683 "jsonrpc": "2.0", 00:33:37.683 "method": "nvmf_set_config", 00:33:37.683 "id": 1, 00:33:37.683 "params": { 00:33:37.683 "admin_cmd_passthru": { 00:33:37.683 "identify_ctrlr": true 00:33:37.683 } 00:33:37.683 } 00:33:37.683 } 00:33:37.683 00:33:37.683 INFO: response: 00:33:37.683 { 00:33:37.683 "jsonrpc": "2.0", 00:33:37.683 "id": 1, 00:33:37.683 "result": true 00:33:37.683 } 00:33:37.683 00:33:37.683 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.683 10:06:54 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:37.683 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.683 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:37.683 INFO: Setting log level to 20 00:33:37.683 INFO: Setting log level to 20 00:33:37.683 INFO: Log level set to 20 00:33:37.683 INFO: Log level set to 20 00:33:37.683 INFO: Requests: 00:33:37.683 { 00:33:37.683 "jsonrpc": "2.0", 00:33:37.683 "method": "framework_start_init", 00:33:37.683 "id": 1 00:33:37.683 } 00:33:37.683 00:33:37.683 INFO: Requests: 00:33:37.683 { 00:33:37.683 "jsonrpc": "2.0", 00:33:37.683 "method": "framework_start_init", 00:33:37.683 "id": 1 00:33:37.683 } 00:33:37.683 00:33:37.683 [2024-07-15 10:06:54.382089] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:37.683 INFO: response: 00:33:37.683 { 00:33:37.683 "jsonrpc": "2.0", 00:33:37.683 "id": 1, 00:33:37.683 "result": true 00:33:37.683 } 00:33:37.683 00:33:37.683 INFO: response: 00:33:37.683 { 00:33:37.683 "jsonrpc": "2.0", 00:33:37.683 "id": 1, 00:33:37.683 "result": true 00:33:37.683 } 00:33:37.683 00:33:37.683 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.683 10:06:54 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 
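--wait-for-rpc starts nvmf_tgt with initialization paused so that startup-time configuration such as nvmf_set_config can land first; the framework_start_init RPC traced above is what releases it. rpc_cmd is a thin wrapper over scripts/rpc.py, so the equivalent sequence by hand (again with $SPDK_DIR standing in for this workspace) would be roughly:

  ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  $SPDK_DIR/scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # enable identify passthru before init
  $SPDK_DIR/scripts/rpc.py framework_start_init                        # resume target initialization
  $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # same flags rpc_cmd passes above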
00:33:37.683 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.683 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:37.683 INFO: Setting log level to 40 00:33:37.683 INFO: Setting log level to 40 00:33:37.683 INFO: Setting log level to 40 00:33:37.683 [2024-07-15 10:06:54.392048] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:37.683 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.683 10:06:54 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:37.683 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:37.683 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:37.683 10:06:54 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:33:37.683 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.683 10:06:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:40.977 Nvme0n1 00:33:40.977 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.977 10:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.978 10:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.978 10:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:40.978 [2024-07-15 10:06:57.277347] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.978 10:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:40.978 [ 00:33:40.978 { 00:33:40.978 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:40.978 "subtype": "Discovery", 00:33:40.978 "listen_addresses": [], 00:33:40.978 "allow_any_host": true, 00:33:40.978 "hosts": [] 00:33:40.978 }, 00:33:40.978 { 00:33:40.978 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:40.978 "subtype": "NVMe", 00:33:40.978 "listen_addresses": [ 00:33:40.978 { 00:33:40.978 "trtype": "TCP", 00:33:40.978 "adrfam": "IPv4", 00:33:40.978 "traddr": "10.0.0.2", 00:33:40.978 
"trsvcid": "4420" 00:33:40.978 } 00:33:40.978 ], 00:33:40.978 "allow_any_host": true, 00:33:40.978 "hosts": [], 00:33:40.978 "serial_number": "SPDK00000000000001", 00:33:40.978 "model_number": "SPDK bdev Controller", 00:33:40.978 "max_namespaces": 1, 00:33:40.978 "min_cntlid": 1, 00:33:40.978 "max_cntlid": 65519, 00:33:40.978 "namespaces": [ 00:33:40.978 { 00:33:40.978 "nsid": 1, 00:33:40.978 "bdev_name": "Nvme0n1", 00:33:40.978 "name": "Nvme0n1", 00:33:40.978 "nguid": "E77FAE6F4A9149D48DDB45A69B3239CF", 00:33:40.978 "uuid": "e77fae6f-4a91-49d4-8ddb-45a69b3239cf" 00:33:40.978 } 00:33:40.978 ] 00:33:40.978 } 00:33:40.978 ] 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.978 10:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:40.978 10:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:40.978 10:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:40.978 EAL: No free 2048 kB hugepages reported on node 1 00:33:40.978 10:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:33:40.978 10:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:40.978 10:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:40.978 10:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:40.978 EAL: No free 2048 kB hugepages reported on node 1 00:33:40.978 10:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:40.978 10:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:33:40.978 10:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:40.978 10:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.978 10:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:40.978 10:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:40.978 10:06:57 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:40.978 10:06:57 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:33:40.978 10:06:57 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:40.978 10:06:57 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:33:40.978 10:06:57 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:40.978 10:06:57 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:40.978 rmmod nvme_tcp 00:33:40.978 rmmod nvme_fabrics 00:33:40.978 rmmod nvme_keyring 00:33:40.978 10:06:57 nvmf_identify_passthru -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:40.978 10:06:57 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:33:40.978 10:06:57 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:33:40.978 10:06:57 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2062994 ']' 00:33:40.978 10:06:57 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2062994 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2062994 ']' 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2062994 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2062994 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2062994' 00:33:40.978 killing process with pid 2062994 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2062994 00:33:40.978 10:06:57 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2062994 00:33:42.356 10:06:59 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:42.356 10:06:59 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:42.356 10:06:59 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:42.356 10:06:59 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:42.356 10:06:59 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:42.356 10:06:59 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:42.356 10:06:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:42.356 10:06:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.911 10:07:01 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:44.911 00:33:44.911 real 0m17.719s 00:33:44.911 user 0m26.080s 00:33:44.911 sys 0m2.194s 00:33:44.911 10:07:01 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:44.911 10:07:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:44.911 ************************************ 00:33:44.911 END TEST nvmf_identify_passthru 00:33:44.911 ************************************ 00:33:44.911 10:07:01 -- common/autotest_common.sh@1142 -- # return 0 00:33:44.911 10:07:01 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:44.911 10:07:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:44.911 10:07:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:44.911 10:07:01 -- common/autotest_common.sh@10 -- # set +x 00:33:44.911 ************************************ 00:33:44.911 START TEST nvmf_dif 00:33:44.911 ************************************ 00:33:44.911 10:07:01 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:44.911 * Looking for test 
storage... 00:33:44.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:44.911 10:07:01 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:44.911 10:07:01 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:44.911 10:07:01 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:44.911 10:07:01 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:44.911 10:07:01 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.911 10:07:01 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.911 10:07:01 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.911 10:07:01 nvmf_dif -- 
paths/export.sh@5 -- # export PATH 00:33:44.911 10:07:01 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:44.911 10:07:01 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:44.911 10:07:01 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:44.911 10:07:01 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:44.911 10:07:01 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:44.911 10:07:01 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.911 10:07:01 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:44.911 10:07:01 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:44.911 10:07:01 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:33:44.911 10:07:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:33:46.813 10:07:03 nvmf_dif 
-- nvmf/common.sh@298 -- # mlx=() 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:46.813 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:46.813 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:46.813 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:46.813 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:33:46.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:46.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms
00:33:46.813
00:33:46.813 --- 10.0.0.2 ping statistics ---
00:33:46.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:46.813 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms
00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:46.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:46.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms
00:33:46.813
00:33:46.813 --- 10.0.0.1 ping statistics ---
00:33:46.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:46.813 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@422 -- # return 0
00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']'
00:33:46.813 10:07:03 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:33:47.744 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:33:47.744 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:33:47.744 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:33:47.744 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:33:47.744 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:33:47.744 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:33:47.744 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:33:47.744 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:33:47.744 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:33:47.744 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:33:47.744 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:33:47.744 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:33:47.744 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:33:47.744 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:33:47.744 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:33:47.744 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:33:47.744 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:33:47.744 10:07:04 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:47.745 10:07:04 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:33:47.745 10:07:04 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:33:47.745 10:07:04 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:47.745 10:07:04 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:33:47.745 10:07:04 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:33:48.003 10:07:04 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'
00:33:48.003 10:07:04 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart
00:33:48.003 10:07:04 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:48.003 10:07:04 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable
00:33:48.003 10:07:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:33:48.003 10:07:04 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2066135
00:33:48.003 10:07:04 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:33:48.003 10:07:04 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2066135
00:33:48.003 10:07:04 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2066135 ']'
00:33:48.003 10:07:04 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:48.003 10:07:04 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:48.003 10:07:04 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:48.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:48.003 10:07:04 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:48.003 10:07:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:33:48.003 [2024-07-15 10:07:04.599350] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:33:48.003 [2024-07-15 10:07:04.599431] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:48.003 EAL: No free 2048 kB hugepages reported on node 1
00:33:48.003 [2024-07-15 10:07:04.644088] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:33:48.003 [2024-07-15 10:07:04.671502] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:48.003 [2024-07-15 10:07:04.758586] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:48.003 [2024-07-15 10:07:04.758663] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:48.003 [2024-07-15 10:07:04.758676] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:48.003 [2024-07-15 10:07:04.758687] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:48.003 [2024-07-15 10:07:04.758696] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
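At this point nvmf_tgt is running inside the namespace (nvmf/common.sh@480 above) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock; only then does dif.sh create the TCP transport with --dif-insert-or-strip, which makes the target insert and strip protection information on behalf of the host. A simplified sketch of what the wait amounts to (an approximation, not the exact autotest_common.sh code):

    # Poll the target's JSON-RPC socket until it responds, giving up if the
    # process exits first. pid and socket path are the values from this run.
    pid=2066135
    rpc_addr=/var/tmp/spdk.sock
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited" >&2; exit 1; }
        "$rpc" -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && break
        sleep 0.5
    done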
00:33:48.003 [2024-07-15 10:07:04.758743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.260 10:07:04 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:48.260 10:07:04 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:33:48.260 10:07:04 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:48.260 10:07:04 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:48.260 10:07:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:48.260 10:07:04 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:48.260 10:07:04 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:48.260 10:07:04 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:48.260 10:07:04 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.260 10:07:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:48.260 [2024-07-15 10:07:04.905245] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:48.260 10:07:04 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.260 10:07:04 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:48.260 10:07:04 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:48.260 10:07:04 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:48.260 10:07:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:48.260 ************************************ 00:33:48.260 START TEST fio_dif_1_default 00:33:48.260 ************************************ 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:48.260 bdev_null0 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:48.260 [2024-07-15 10:07:04.965564] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:48.260 { 00:33:48.260 "params": { 00:33:48.260 "name": "Nvme$subsystem", 00:33:48.260 "trtype": "$TEST_TRANSPORT", 00:33:48.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:48.260 "adrfam": "ipv4", 00:33:48.260 "trsvcid": "$NVMF_PORT", 00:33:48.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:48.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:48.260 "hdgst": ${hdgst:-false}, 00:33:48.260 "ddgst": ${ddgst:-false} 00:33:48.260 }, 00:33:48.260 "method": "bdev_nvme_attach_controller" 00:33:48.260 } 00:33:48.260 EOF 00:33:48.260 )") 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:48.260 "params": { 00:33:48.260 "name": "Nvme0", 00:33:48.260 "trtype": "tcp", 00:33:48.260 "traddr": "10.0.0.2", 00:33:48.260 "adrfam": "ipv4", 00:33:48.260 "trsvcid": "4420", 00:33:48.260 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:48.260 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:48.260 "hdgst": false, 00:33:48.260 "ddgst": false 00:33:48.260 }, 00:33:48.260 "method": "bdev_nvme_attach_controller" 00:33:48.260 }' 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:48.260 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:48.261 10:07:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:48.261 10:07:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:48.261 10:07:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:48.261 10:07:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:48.261 10:07:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:48.517 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:48.517 fio-3.35 00:33:48.517 Starting 1 thread 00:33:48.517 EAL: No free 2048 kB hugepages reported on node 1 00:34:00.713 00:34:00.713 filename0: (groupid=0, jobs=1): err= 0: pid=2066363: Mon Jul 15 10:07:15 2024 00:34:00.713 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10003msec) 00:34:00.713 slat (nsec): min=6609, max=53519, avg=9095.75, stdev=3597.02 00:34:00.713 clat (usec): min=696, max=47665, avg=21030.20, stdev=20204.54 00:34:00.713 lat (usec): min=703, max=47692, avg=21039.29, stdev=20204.49 00:34:00.713 clat percentiles (usec): 00:34:00.713 | 1.00th=[ 742], 5.00th=[ 758], 10.00th=[ 766], 20.00th=[ 783], 00:34:00.713 | 30.00th=[ 791], 40.00th=[ 807], 50.00th=[41157], 60.00th=[41157], 00:34:00.713 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:00.713 | 99.00th=[41157], 99.50th=[41157], 99.90th=[47449], 99.95th=[47449], 00:34:00.713 | 99.99th=[47449] 00:34:00.713 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=761.26, stdev=20.18, samples=19 00:34:00.713 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:34:00.713 
lat (usec) : 750=2.58%, 1000=47.26% 00:34:00.713 lat (msec) : 2=0.05%, 50=50.11% 00:34:00.713 cpu : usr=89.80%, sys=9.93%, ctx=16, majf=0, minf=233 00:34:00.713 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:00.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.713 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.713 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:00.713 00:34:00.713 Run status group 0 (all jobs): 00:34:00.713 READ: bw=760KiB/s (778kB/s), 760KiB/s-760KiB/s (778kB/s-778kB/s), io=7600KiB (7782kB), run=10003-10003msec 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.713 00:34:00.713 real 0m11.125s 00:34:00.713 user 0m10.045s 00:34:00.713 sys 0m1.297s 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:00.713 ************************************ 00:34:00.713 END TEST fio_dif_1_default 00:34:00.713 ************************************ 00:34:00.713 10:07:16 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:34:00.713 10:07:16 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:00.713 10:07:16 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:00.713 10:07:16 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:00.713 10:07:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:00.713 ************************************ 00:34:00.713 START TEST fio_dif_1_multi_subsystems 00:34:00.713 ************************************ 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in 
"$@" 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:00.713 bdev_null0 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:00.713 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:00.714 [2024-07-15 10:07:16.143377] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:00.714 bdev_null1 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:00.714 10:07:16 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:00.714 { 00:34:00.714 "params": { 00:34:00.714 "name": "Nvme$subsystem", 00:34:00.714 "trtype": "$TEST_TRANSPORT", 00:34:00.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:00.714 "adrfam": "ipv4", 00:34:00.714 "trsvcid": "$NVMF_PORT", 00:34:00.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:00.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:00.714 "hdgst": ${hdgst:-false}, 00:34:00.714 "ddgst": ${ddgst:-false} 00:34:00.714 }, 00:34:00.714 "method": "bdev_nvme_attach_controller" 00:34:00.714 } 00:34:00.714 EOF 00:34:00.714 )") 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:00.714 { 00:34:00.714 "params": { 00:34:00.714 "name": "Nvme$subsystem", 00:34:00.714 "trtype": "$TEST_TRANSPORT", 00:34:00.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:00.714 "adrfam": "ipv4", 00:34:00.714 "trsvcid": "$NVMF_PORT", 00:34:00.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:00.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:00.714 "hdgst": ${hdgst:-false}, 00:34:00.714 "ddgst": ${ddgst:-false} 00:34:00.714 }, 00:34:00.714 "method": "bdev_nvme_attach_controller" 00:34:00.714 } 00:34:00.714 EOF 00:34:00.714 )") 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
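gen_nvmf_target_json (nvmf/common.sh@532 onwards) emits one "params"/"method" fragment per subsystem id it is given; IFS=, and printf join the fragments into the JSON array printed just below, which the fio plugin consumes via --spdk_json_conf. Each entry is the JSON form of a bdev_nvme_attach_controller call; spelled out as an explicit RPC it would look roughly like this (illustrative only, the test never issues this call because the plugin replays the JSON itself):

    # Rough equivalent of the "Nvme1" entry in the generated config (assumed
    # rpc.py flag spelling; the resulting bdev would be Nvme1n1).
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1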
00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:34:00.714 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:00.714 "params": { 00:34:00.714 "name": "Nvme0", 00:34:00.714 "trtype": "tcp", 00:34:00.714 "traddr": "10.0.0.2", 00:34:00.714 "adrfam": "ipv4", 00:34:00.714 "trsvcid": "4420", 00:34:00.714 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:00.714 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:00.714 "hdgst": false, 00:34:00.715 "ddgst": false 00:34:00.715 }, 00:34:00.715 "method": "bdev_nvme_attach_controller" 00:34:00.715 },{ 00:34:00.715 "params": { 00:34:00.715 "name": "Nvme1", 00:34:00.715 "trtype": "tcp", 00:34:00.715 "traddr": "10.0.0.2", 00:34:00.715 "adrfam": "ipv4", 00:34:00.715 "trsvcid": "4420", 00:34:00.715 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:00.715 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:00.715 "hdgst": false, 00:34:00.715 "ddgst": false 00:34:00.715 }, 00:34:00.715 "method": "bdev_nvme_attach_controller" 00:34:00.715 }' 00:34:00.715 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:00.715 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:00.715 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:00.715 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:00.715 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:00.715 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:00.715 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:00.715 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:00.715 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:00.715 10:07:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:00.715 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:00.715 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:00.715 fio-3.35 00:34:00.715 Starting 2 threads 00:34:00.715 EAL: No free 2048 kB hugepages reported on node 1 00:34:10.680 00:34:10.680 filename0: (groupid=0, jobs=1): err= 0: pid=2067769: Mon Jul 15 10:07:27 2024 00:34:10.680 read: IOPS=189, BW=758KiB/s (777kB/s)(7584KiB/10001msec) 00:34:10.680 slat (nsec): min=5232, max=34263, avg=9219.79, stdev=3279.65 00:34:10.680 clat (usec): min=677, max=46002, avg=21068.07, stdev=20189.48 00:34:10.680 lat (usec): min=684, max=46022, avg=21077.29, stdev=20189.57 00:34:10.680 clat percentiles (usec): 00:34:10.680 | 1.00th=[ 758], 5.00th=[ 775], 10.00th=[ 783], 20.00th=[ 799], 00:34:10.680 | 30.00th=[ 807], 40.00th=[ 816], 50.00th=[41157], 60.00th=[41157], 00:34:10.680 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:10.680 | 99.00th=[41157], 99.50th=[41157], 99.90th=[45876], 99.95th=[45876], 00:34:10.680 | 99.99th=[45876] 
00:34:10.680 bw ( KiB/s): min= 672, max= 768, per=50.05%, avg=759.58, stdev=25.78, samples=19 00:34:10.680 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:34:10.680 lat (usec) : 750=0.47%, 1000=49.31% 00:34:10.680 lat (msec) : 50=50.21% 00:34:10.680 cpu : usr=94.39%, sys=5.31%, ctx=17, majf=0, minf=189 00:34:10.680 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.680 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.680 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:10.680 filename1: (groupid=0, jobs=1): err= 0: pid=2067770: Mon Jul 15 10:07:27 2024 00:34:10.680 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10002msec) 00:34:10.680 slat (nsec): min=4846, max=31047, avg=9081.69, stdev=2629.40 00:34:10.680 clat (usec): min=768, max=46002, avg=21073.03, stdev=20133.26 00:34:10.680 lat (usec): min=775, max=46015, avg=21082.11, stdev=20133.12 00:34:10.680 clat percentiles (usec): 00:34:10.680 | 1.00th=[ 791], 5.00th=[ 807], 10.00th=[ 824], 20.00th=[ 857], 00:34:10.680 | 30.00th=[ 865], 40.00th=[ 881], 50.00th=[41157], 60.00th=[41157], 00:34:10.680 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:10.680 | 99.00th=[41157], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:34:10.680 | 99.99th=[45876] 00:34:10.680 bw ( KiB/s): min= 672, max= 768, per=50.05%, avg=759.58, stdev=25.78, samples=19 00:34:10.680 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:34:10.680 lat (usec) : 1000=49.37% 00:34:10.680 lat (msec) : 2=0.42%, 50=50.21% 00:34:10.680 cpu : usr=94.75%, sys=4.96%, ctx=13, majf=0, minf=73 00:34:10.680 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.680 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.680 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:10.680 00:34:10.680 Run status group 0 (all jobs): 00:34:10.680 READ: bw=1516KiB/s (1553kB/s), 758KiB/s-758KiB/s (776kB/s-777kB/s), io=14.8MiB (15.5MB), run=10001-10002msec 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.938 00:34:10.938 real 0m11.438s 00:34:10.938 user 0m20.356s 00:34:10.938 sys 0m1.327s 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:10.938 10:07:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:10.938 ************************************ 00:34:10.938 END TEST fio_dif_1_multi_subsystems 00:34:10.938 ************************************ 00:34:10.938 10:07:27 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:34:10.938 10:07:27 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:10.938 10:07:27 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:10.938 10:07:27 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:10.938 10:07:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:10.938 ************************************ 00:34:10.938 START TEST fio_dif_rand_params 00:34:10.938 ************************************ 00:34:10.938 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:34:10.938 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:10.938 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:10.938 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:10.938 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:10.938 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:10.938 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:10.938 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:10.938 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:10.938 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:10.938 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:10.938 10:07:27 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:10.938 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:10.938 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:10.938 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.938 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.938 bdev_null0 00:34:10.938 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.938 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:10.938 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.939 [2024-07-15 10:07:27.625102] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:10.939 { 00:34:10.939 "params": { 00:34:10.939 "name": "Nvme$subsystem", 00:34:10.939 "trtype": "$TEST_TRANSPORT", 00:34:10.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.939 "adrfam": "ipv4", 00:34:10.939 "trsvcid": "$NVMF_PORT", 00:34:10.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.939 "hdgst": ${hdgst:-false}, 00:34:10.939 "ddgst": ${ddgst:-false} 00:34:10.939 }, 00:34:10.939 "method": "bdev_nvme_attach_controller" 00:34:10.939 } 00:34:10.939 EOF 00:34:10.939 )") 
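fio_dif_rand_params repeats the pattern of the two tests above, but with protection type 3 on the backing device and different fio parameters (bs=128k, numjobs=3, iodepth=3, runtime=5, per target/dif.sh@103). Collected from the trace, the target-side RPCs for one subsystem are:

    # A 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF
    # type 3, exported through one NVMe/TCP subsystem on 10.0.0.2:4420.
    rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420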
00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
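On the fio side, gen_fio_conf writes the job description that fio reads from /dev/fd/61 while the JSON config arrives on /dev/fd/62. Reconstructed from the dif.sh parameters above and the job banner below (an assumed reconstruction, not a dump from this run; gen_fio_conf's exact output may differ), the job file looks roughly like:

    # Approximate job text for this fio_dif_rand_params run; filename=Nvme0n1
    # is the bdev exposed by the bdev_nvme_attach_controller entry in the JSON.
    [global]
    thread=1
    ioengine=spdk_bdev
    time_based=1
    runtime=5
    bs=128k
    iodepth=3
    [filename0]
    filename=Nvme0n1
    rw=randread
    numjobs=3

The "Starting 3 threads" line further down matches numjobs=3, and the job name filename0 is the per-file section name gen_fio_conf assigns.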
00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:10.939 "params": { 00:34:10.939 "name": "Nvme0", 00:34:10.939 "trtype": "tcp", 00:34:10.939 "traddr": "10.0.0.2", 00:34:10.939 "adrfam": "ipv4", 00:34:10.939 "trsvcid": "4420", 00:34:10.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:10.939 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:10.939 "hdgst": false, 00:34:10.939 "ddgst": false 00:34:10.939 }, 00:34:10.939 "method": "bdev_nvme_attach_controller" 00:34:10.939 }' 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:10.939 10:07:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:11.199 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:11.199 ... 
00:34:11.199 fio-3.35 00:34:11.199 Starting 3 threads 00:34:11.199 EAL: No free 2048 kB hugepages reported on node 1 00:34:17.780 00:34:17.780 filename0: (groupid=0, jobs=1): err= 0: pid=2069163: Mon Jul 15 10:07:33 2024 00:34:17.780 read: IOPS=193, BW=24.2MiB/s (25.3MB/s)(122MiB/5046msec) 00:34:17.780 slat (nsec): min=4605, max=53743, avg=15405.88, stdev=5861.96 00:34:17.780 clat (usec): min=5049, max=58633, avg=15463.39, stdev=12633.68 00:34:17.780 lat (usec): min=5064, max=58646, avg=15478.80, stdev=12633.84 00:34:17.780 clat percentiles (usec): 00:34:17.780 | 1.00th=[ 5669], 5.00th=[ 6194], 10.00th=[ 6652], 20.00th=[ 8717], 00:34:17.780 | 30.00th=[ 9503], 40.00th=[10683], 50.00th=[12125], 60.00th=[13042], 00:34:17.780 | 70.00th=[13960], 80.00th=[15401], 90.00th=[46924], 95.00th=[51119], 00:34:17.780 | 99.00th=[55837], 99.50th=[56361], 99.90th=[58459], 99.95th=[58459], 00:34:17.780 | 99.99th=[58459] 00:34:17.780 bw ( KiB/s): min=16929, max=30976, per=32.45%, avg=24886.50, stdev=4738.55, samples=10 00:34:17.780 iops : min= 132, max= 242, avg=194.40, stdev=37.07, samples=10 00:34:17.780 lat (msec) : 10=34.77%, 20=54.87%, 50=3.49%, 100=6.87% 00:34:17.780 cpu : usr=92.21%, sys=7.33%, ctx=23, majf=0, minf=64 00:34:17.780 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:17.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.780 issued rwts: total=975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:17.780 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:17.780 filename0: (groupid=0, jobs=1): err= 0: pid=2069164: Mon Jul 15 10:07:33 2024 00:34:17.780 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(128MiB/5046msec) 00:34:17.780 slat (nsec): min=5062, max=79368, avg=13583.60, stdev=4844.69 00:34:17.780 clat (usec): min=5195, max=92995, avg=14781.82, stdev=12276.54 00:34:17.780 lat (usec): min=5207, max=93008, avg=14795.40, stdev=12276.93 00:34:17.780 clat percentiles (usec): 00:34:17.780 | 1.00th=[ 5669], 5.00th=[ 6259], 10.00th=[ 7635], 20.00th=[ 8717], 00:34:17.780 | 30.00th=[ 9372], 40.00th=[10290], 50.00th=[11469], 60.00th=[12518], 00:34:17.780 | 70.00th=[13698], 80.00th=[14877], 90.00th=[17171], 95.00th=[50070], 00:34:17.780 | 99.00th=[53740], 99.50th=[55837], 99.90th=[91751], 99.95th=[92799], 00:34:17.780 | 99.99th=[92799] 00:34:17.780 bw ( KiB/s): min=13568, max=33280, per=33.95%, avg=26035.20, stdev=6242.82, samples=10 00:34:17.780 iops : min= 106, max= 260, avg=203.40, stdev=48.77, samples=10 00:34:17.780 lat (msec) : 10=37.55%, 20=53.53%, 50=3.53%, 100=5.39% 00:34:17.780 cpu : usr=91.34%, sys=8.23%, ctx=18, majf=0, minf=184 00:34:17.780 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:17.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.780 issued rwts: total=1020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:17.780 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:17.780 filename0: (groupid=0, jobs=1): err= 0: pid=2069165: Mon Jul 15 10:07:33 2024 00:34:17.780 read: IOPS=204, BW=25.6MiB/s (26.8MB/s)(129MiB/5026msec) 00:34:17.780 slat (nsec): min=4631, max=41887, avg=13737.89, stdev=4094.22 00:34:17.780 clat (usec): min=5220, max=89287, avg=14647.14, stdev=11751.54 00:34:17.780 lat (usec): min=5231, max=89301, avg=14660.88, stdev=11751.62 00:34:17.780 clat percentiles (usec): 
00:34:17.780 | 1.00th=[ 5735], 5.00th=[ 6325], 10.00th=[ 6915], 20.00th=[ 8979], 00:34:17.780 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[11600], 60.00th=[12518], 00:34:17.780 | 70.00th=[13304], 80.00th=[14353], 90.00th=[17695], 95.00th=[50070], 00:34:17.780 | 99.00th=[54264], 99.50th=[56886], 99.90th=[57934], 99.95th=[89654], 00:34:17.780 | 99.99th=[89654] 00:34:17.780 bw ( KiB/s): min=19712, max=33280, per=34.21%, avg=26233.60, stdev=5531.42, samples=10 00:34:17.780 iops : min= 154, max= 260, avg=204.90, stdev=43.16, samples=10 00:34:17.780 lat (msec) : 10=32.39%, 20=58.75%, 50=3.70%, 100=5.16% 00:34:17.780 cpu : usr=90.65%, sys=8.90%, ctx=17, majf=0, minf=77 00:34:17.780 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:17.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.780 issued rwts: total=1028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:17.780 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:17.780 00:34:17.780 Run status group 0 (all jobs): 00:34:17.780 READ: bw=74.9MiB/s (78.5MB/s), 24.2MiB/s-25.6MiB/s (25.3MB/s-26.8MB/s), io=378MiB (396MB), run=5026-5046msec 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
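Note: each create_subsystem call traced here expands to the same four-RPC sequence against the running target. A condensed sketch for sub_id=0 follows, with every argument copied from the trace; routing the calls through scripts/rpc.py is an assumption about what the rpc_cmd wrapper does:

  # sketch: what create_subsystem 0 issues (rpc.py invocation path assumed)
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The sequence repeats for sub_id=1 and sub_id=2 with the index substituted into the bdev name and NQN, and destroy_subsystems later reverses it with nvmf_delete_subsystem followed by bdev_null_delete.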
00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:17.780 bdev_null0 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:17.780 [2024-07-15 10:07:33.890337] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:17.780 bdev_null1 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:34:17.780 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:17.781 bdev_null2 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:34:17.781 { 00:34:17.781 "params": { 00:34:17.781 "name": "Nvme$subsystem", 00:34:17.781 "trtype": "$TEST_TRANSPORT", 00:34:17.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.781 "adrfam": "ipv4", 00:34:17.781 "trsvcid": "$NVMF_PORT", 00:34:17.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.781 "hdgst": ${hdgst:-false}, 00:34:17.781 "ddgst": ${ddgst:-false} 00:34:17.781 }, 00:34:17.781 "method": "bdev_nvme_attach_controller" 00:34:17.781 } 00:34:17.781 EOF 00:34:17.781 )") 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:17.781 { 00:34:17.781 "params": { 00:34:17.781 "name": "Nvme$subsystem", 00:34:17.781 "trtype": "$TEST_TRANSPORT", 00:34:17.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.781 "adrfam": "ipv4", 00:34:17.781 "trsvcid": "$NVMF_PORT", 00:34:17.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.781 "hdgst": ${hdgst:-false}, 00:34:17.781 "ddgst": ${ddgst:-false} 00:34:17.781 }, 00:34:17.781 "method": "bdev_nvme_attach_controller" 00:34:17.781 } 00:34:17.781 EOF 00:34:17.781 )") 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:17.781 { 00:34:17.781 "params": { 00:34:17.781 "name": "Nvme$subsystem", 00:34:17.781 "trtype": "$TEST_TRANSPORT", 00:34:17.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.781 "adrfam": "ipv4", 00:34:17.781 "trsvcid": "$NVMF_PORT", 00:34:17.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.781 "hdgst": ${hdgst:-false}, 00:34:17.781 "ddgst": ${ddgst:-false} 00:34:17.781 }, 00:34:17.781 "method": "bdev_nvme_attach_controller" 00:34:17.781 } 00:34:17.781 EOF 00:34:17.781 )") 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:17.781 "params": { 00:34:17.781 "name": "Nvme0", 00:34:17.781 "trtype": "tcp", 00:34:17.781 "traddr": "10.0.0.2", 00:34:17.781 "adrfam": "ipv4", 00:34:17.781 "trsvcid": "4420", 00:34:17.781 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:17.781 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:17.781 "hdgst": false, 00:34:17.781 "ddgst": false 00:34:17.781 }, 00:34:17.781 "method": "bdev_nvme_attach_controller" 00:34:17.781 },{ 00:34:17.781 "params": { 00:34:17.781 "name": "Nvme1", 00:34:17.781 "trtype": "tcp", 00:34:17.781 "traddr": "10.0.0.2", 00:34:17.781 "adrfam": "ipv4", 00:34:17.781 "trsvcid": "4420", 00:34:17.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:17.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:17.781 "hdgst": false, 00:34:17.781 "ddgst": false 00:34:17.781 }, 00:34:17.781 "method": "bdev_nvme_attach_controller" 00:34:17.781 },{ 00:34:17.781 "params": { 00:34:17.781 "name": "Nvme2", 00:34:17.781 "trtype": "tcp", 00:34:17.781 "traddr": "10.0.0.2", 00:34:17.781 "adrfam": "ipv4", 00:34:17.781 "trsvcid": "4420", 00:34:17.781 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:17.781 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:17.781 "hdgst": false, 00:34:17.781 "ddgst": false 00:34:17.781 }, 00:34:17.781 "method": "bdev_nvme_attach_controller" 00:34:17.781 }' 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:17.781 10:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:17.781 10:07:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:34:17.781 10:07:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:17.781 10:07:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:17.781 10:07:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:17.781 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:17.781 ... 00:34:17.781 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:17.781 ... 00:34:17.781 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:17.781 ... 00:34:17.781 fio-3.35 00:34:17.781 Starting 24 threads 00:34:17.781 EAL: No free 2048 kB hugepages reported on node 1 00:34:29.981 00:34:29.981 filename0: (groupid=0, jobs=1): err= 0: pid=2070023: Mon Jul 15 10:07:45 2024 00:34:29.981 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.7MiB/10024msec) 00:34:29.981 slat (usec): min=13, max=121, avg=44.39, stdev=13.85 00:34:29.981 clat (usec): min=25863, max=46068, avg=33120.82, stdev=1332.80 00:34:29.981 lat (usec): min=25913, max=46109, avg=33165.21, stdev=1332.95 00:34:29.981 clat percentiles (usec): 00:34:29.981 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:34:29.981 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:34:29.981 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:34:29.981 | 99.00th=[41681], 99.50th=[43254], 99.90th=[45876], 99.95th=[45876], 00:34:29.981 | 99.99th=[45876] 00:34:29.981 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1907.20, stdev=39.40, samples=20 00:34:29.981 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:34:29.981 lat (msec) : 50=100.00% 00:34:29.981 cpu : usr=97.65%, sys=1.84%, ctx=27, majf=0, minf=58 00:34:29.981 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:29.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.981 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.981 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.981 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.981 filename0: (groupid=0, jobs=1): err= 0: pid=2070024: Mon Jul 15 10:07:45 2024 00:34:29.981 read: IOPS=476, BW=1905KiB/s (1950kB/s)(18.6MiB/10013msec) 00:34:29.981 slat (nsec): min=7970, max=51201, avg=15434.89, stdev=7325.99 00:34:29.981 clat (usec): min=26483, max=64604, avg=33468.60, stdev=2052.41 00:34:29.981 lat (usec): min=26506, max=64639, avg=33484.04, stdev=2052.72 00:34:29.981 clat percentiles (usec): 00:34:29.981 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:34:29.981 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:34:29.981 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:34:29.981 | 99.00th=[41157], 99.50th=[42206], 99.90th=[64226], 99.95th=[64750], 00:34:29.981 | 99.99th=[64750] 00:34:29.981 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1900.80, stdev=62.64, samples=20 00:34:29.981 iops : min= 448, max= 512, avg=475.20, stdev=15.66, samples=20 00:34:29.981 lat (msec) : 50=99.66%, 100=0.34% 00:34:29.981 cpu : usr=98.34%, 
sys=1.27%, ctx=16, majf=0, minf=47 00:34:29.981 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:29.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.981 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.981 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.981 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.981 filename0: (groupid=0, jobs=1): err= 0: pid=2070025: Mon Jul 15 10:07:45 2024 00:34:29.981 read: IOPS=474, BW=1899KiB/s (1945kB/s)(18.6MiB/10009msec) 00:34:29.981 slat (usec): min=5, max=140, avg=46.53, stdev=18.79 00:34:29.981 clat (usec): min=25918, max=90960, avg=33251.11, stdev=3517.21 00:34:29.981 lat (usec): min=25942, max=90977, avg=33297.65, stdev=3515.44 00:34:29.981 clat percentiles (usec): 00:34:29.981 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:34:29.981 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:34:29.981 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:34:29.981 | 99.00th=[41681], 99.50th=[43779], 99.90th=[90702], 99.95th=[90702], 00:34:29.981 | 99.99th=[90702] 00:34:29.981 bw ( KiB/s): min= 1664, max= 1920, per=4.13%, avg=1893.05, stdev=68.52, samples=19 00:34:29.981 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:34:29.981 lat (msec) : 50=99.66%, 100=0.34% 00:34:29.981 cpu : usr=96.98%, sys=1.87%, ctx=159, majf=0, minf=49 00:34:29.981 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:29.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.981 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.981 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.981 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.981 filename0: (groupid=0, jobs=1): err= 0: pid=2070026: Mon Jul 15 10:07:45 2024 00:34:29.981 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.7MiB/10024msec) 00:34:29.981 slat (usec): min=8, max=124, avg=44.74, stdev=16.29 00:34:29.981 clat (usec): min=24125, max=58698, avg=33096.02, stdev=1480.97 00:34:29.981 lat (usec): min=24133, max=58743, avg=33140.76, stdev=1482.03 00:34:29.981 clat percentiles (usec): 00:34:29.981 | 1.00th=[30540], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:34:29.981 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:34:29.981 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:34:29.981 | 99.00th=[41681], 99.50th=[43254], 99.90th=[45876], 99.95th=[45876], 00:34:29.981 | 99.99th=[58459] 00:34:29.981 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1907.20, stdev=39.40, samples=20 00:34:29.981 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:34:29.981 lat (msec) : 50=99.96%, 100=0.04% 00:34:29.981 cpu : usr=96.76%, sys=2.01%, ctx=59, majf=0, minf=37 00:34:29.981 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:29.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.981 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.981 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.981 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.981 filename0: (groupid=0, jobs=1): err= 0: pid=2070027: Mon Jul 15 10:07:45 2024 00:34:29.981 read: IOPS=480, BW=1921KiB/s (1967kB/s)(18.8MiB/10027msec) 00:34:29.981 slat 
(usec): min=5, max=144, avg=34.02, stdev=17.15 00:34:29.981 clat (usec): min=6327, max=43800, avg=33054.94, stdev=2437.97 00:34:29.981 lat (usec): min=6348, max=43818, avg=33088.96, stdev=2437.63 00:34:29.981 clat percentiles (usec): 00:34:29.982 | 1.00th=[24249], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:34:29.982 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:34:29.982 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:34:29.982 | 99.00th=[39584], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:34:29.982 | 99.99th=[43779] 00:34:29.982 bw ( KiB/s): min= 1792, max= 2048, per=4.19%, avg=1920.00, stdev=41.53, samples=20 00:34:29.982 iops : min= 448, max= 512, avg=480.00, stdev=10.38, samples=20 00:34:29.982 lat (msec) : 10=0.58%, 20=0.19%, 50=99.23% 00:34:29.982 cpu : usr=98.18%, sys=1.38%, ctx=28, majf=0, minf=58 00:34:29.982 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:29.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.982 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.982 filename0: (groupid=0, jobs=1): err= 0: pid=2070028: Mon Jul 15 10:07:45 2024 00:34:29.982 read: IOPS=474, BW=1899KiB/s (1945kB/s)(18.6MiB/10009msec) 00:34:29.982 slat (usec): min=7, max=112, avg=41.69, stdev=21.45 00:34:29.982 clat (usec): min=15667, max=81692, avg=33318.69, stdev=3235.40 00:34:29.982 lat (usec): min=15690, max=81712, avg=33360.38, stdev=3233.13 00:34:29.982 clat percentiles (usec): 00:34:29.982 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:34:29.982 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:34:29.982 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:34:29.982 | 99.00th=[43254], 99.50th=[46400], 99.90th=[81265], 99.95th=[81265], 00:34:29.982 | 99.99th=[81265] 00:34:29.982 bw ( KiB/s): min= 1664, max= 1920, per=4.13%, avg=1893.05, stdev=68.52, samples=19 00:34:29.982 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:34:29.982 lat (msec) : 20=0.04%, 50=99.54%, 100=0.42% 00:34:29.982 cpu : usr=98.05%, sys=1.48%, ctx=21, majf=0, minf=55 00:34:29.982 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:29.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.982 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.982 filename0: (groupid=0, jobs=1): err= 0: pid=2070029: Mon Jul 15 10:07:45 2024 00:34:29.982 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10011msec) 00:34:29.982 slat (usec): min=5, max=106, avg=42.25, stdev=21.15 00:34:29.982 clat (usec): min=15711, max=66369, avg=33211.41, stdev=2287.08 00:34:29.982 lat (usec): min=15722, max=66385, avg=33253.66, stdev=2285.53 00:34:29.982 clat percentiles (usec): 00:34:29.982 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:34:29.982 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:34:29.982 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:34:29.982 | 99.00th=[42730], 99.50th=[43254], 99.90th=[66323], 99.95th=[66323], 00:34:29.982 | 99.99th=[66323] 
00:34:29.982 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1900.80, stdev=62.64, samples=20 00:34:29.982 iops : min= 448, max= 512, avg=475.20, stdev=15.66, samples=20 00:34:29.982 lat (msec) : 20=0.04%, 50=99.58%, 100=0.38% 00:34:29.982 cpu : usr=98.40%, sys=1.19%, ctx=17, majf=0, minf=29 00:34:29.982 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:29.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.982 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.982 filename0: (groupid=0, jobs=1): err= 0: pid=2070030: Mon Jul 15 10:07:45 2024 00:34:29.982 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10005msec) 00:34:29.982 slat (usec): min=8, max=122, avg=52.41, stdev=25.54 00:34:29.982 clat (usec): min=13626, max=61874, avg=33100.78, stdev=2206.25 00:34:29.982 lat (usec): min=13649, max=61948, avg=33153.19, stdev=2202.40 00:34:29.982 clat percentiles (usec): 00:34:29.982 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:34:29.982 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:34:29.982 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:34:29.982 | 99.00th=[42206], 99.50th=[43779], 99.90th=[61604], 99.95th=[61604], 00:34:29.982 | 99.99th=[62129] 00:34:29.982 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1899.79, stdev=47.95, samples=19 00:34:29.982 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:34:29.982 lat (msec) : 20=0.08%, 50=99.50%, 100=0.42% 00:34:29.982 cpu : usr=91.59%, sys=4.06%, ctx=209, majf=0, minf=56 00:34:29.982 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:29.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.982 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.982 filename1: (groupid=0, jobs=1): err= 0: pid=2070031: Mon Jul 15 10:07:45 2024 00:34:29.982 read: IOPS=480, BW=1922KiB/s (1968kB/s)(18.8MiB/10023msec) 00:34:29.982 slat (usec): min=5, max=136, avg=52.27, stdev=21.76 00:34:29.982 clat (usec): min=6204, max=43836, avg=32835.52, stdev=2584.85 00:34:29.982 lat (usec): min=6213, max=43870, avg=32887.79, stdev=2586.93 00:34:29.982 clat percentiles (usec): 00:34:29.982 | 1.00th=[27395], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:34:29.982 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:34:29.982 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:34:29.982 | 99.00th=[39584], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:34:29.982 | 99.99th=[43779] 00:34:29.982 bw ( KiB/s): min= 1792, max= 2176, per=4.19%, avg=1920.00, stdev=71.93, samples=20 00:34:29.982 iops : min= 448, max= 544, avg=480.00, stdev=17.98, samples=20 00:34:29.982 lat (msec) : 10=0.48%, 20=0.52%, 50=99.00% 00:34:29.982 cpu : usr=95.10%, sys=2.72%, ctx=298, majf=0, minf=54 00:34:29.982 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:29.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 issued rwts: total=4816,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:34:29.982 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.982 filename1: (groupid=0, jobs=1): err= 0: pid=2070032: Mon Jul 15 10:07:45 2024 00:34:29.982 read: IOPS=475, BW=1903KiB/s (1949kB/s)(18.6MiB/10021msec) 00:34:29.982 slat (nsec): min=3907, max=73094, avg=31710.62, stdev=9551.93 00:34:29.982 clat (usec): min=14210, max=76732, avg=33343.52, stdev=2803.86 00:34:29.982 lat (usec): min=14221, max=76744, avg=33375.23, stdev=2802.56 00:34:29.982 clat percentiles (usec): 00:34:29.982 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:34:29.982 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:34:29.982 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:34:29.982 | 99.00th=[42730], 99.50th=[43254], 99.90th=[77071], 99.95th=[77071], 00:34:29.982 | 99.99th=[77071] 00:34:29.982 bw ( KiB/s): min= 1667, max= 2032, per=4.14%, avg=1898.65, stdev=72.95, samples=20 00:34:29.982 iops : min= 416, max= 508, avg=474.60, stdev=18.37, samples=20 00:34:29.982 lat (msec) : 20=0.04%, 50=99.58%, 100=0.38% 00:34:29.982 cpu : usr=96.84%, sys=2.24%, ctx=219, majf=0, minf=48 00:34:29.982 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:29.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.982 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.982 filename1: (groupid=0, jobs=1): err= 0: pid=2070033: Mon Jul 15 10:07:45 2024 00:34:29.982 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.7MiB/10024msec) 00:34:29.982 slat (usec): min=9, max=123, avg=44.84, stdev=14.94 00:34:29.982 clat (usec): min=25891, max=46201, avg=33103.99, stdev=1342.35 00:34:29.982 lat (usec): min=25931, max=46216, avg=33148.83, stdev=1342.56 00:34:29.982 clat percentiles (usec): 00:34:29.982 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:34:29.982 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:34:29.982 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:34:29.982 | 99.00th=[41681], 99.50th=[43779], 99.90th=[45876], 99.95th=[46400], 00:34:29.982 | 99.99th=[46400] 00:34:29.982 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1907.20, stdev=39.40, samples=20 00:34:29.982 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:34:29.982 lat (msec) : 50=100.00% 00:34:29.982 cpu : usr=95.83%, sys=2.50%, ctx=165, majf=0, minf=38 00:34:29.982 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:29.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.982 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.982 filename1: (groupid=0, jobs=1): err= 0: pid=2070034: Mon Jul 15 10:07:45 2024 00:34:29.982 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.7MiB/10024msec) 00:34:29.982 slat (usec): min=7, max=136, avg=49.70, stdev=18.19 00:34:29.982 clat (usec): min=25893, max=46203, avg=33072.67, stdev=1341.47 00:34:29.982 lat (usec): min=25935, max=46217, avg=33122.36, stdev=1342.10 00:34:29.982 clat percentiles (usec): 00:34:29.982 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 
00:34:29.982 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:34:29.982 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:34:29.982 | 99.00th=[41681], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:34:29.982 | 99.99th=[46400] 00:34:29.982 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1907.20, stdev=39.40, samples=20 00:34:29.982 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:34:29.982 lat (msec) : 50=100.00% 00:34:29.982 cpu : usr=96.82%, sys=1.97%, ctx=104, majf=0, minf=43 00:34:29.982 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:29.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.982 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.982 filename1: (groupid=0, jobs=1): err= 0: pid=2070035: Mon Jul 15 10:07:45 2024 00:34:29.982 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.7MiB/10024msec) 00:34:29.982 slat (usec): min=10, max=129, avg=48.88, stdev=18.67 00:34:29.982 clat (usec): min=25228, max=46071, avg=33091.24, stdev=1372.58 00:34:29.982 lat (usec): min=25298, max=46086, avg=33140.12, stdev=1369.02 00:34:29.982 clat percentiles (usec): 00:34:29.982 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:34:29.982 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:34:29.982 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:34:29.982 | 99.00th=[41681], 99.50th=[43779], 99.90th=[45876], 99.95th=[45876], 00:34:29.982 | 99.99th=[45876] 00:34:29.982 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1907.20, stdev=39.40, samples=20 00:34:29.982 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:34:29.982 lat (msec) : 50=100.00% 00:34:29.982 cpu : usr=95.68%, sys=2.90%, ctx=130, majf=0, minf=54 00:34:29.982 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:29.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.982 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.982 filename1: (groupid=0, jobs=1): err= 0: pid=2070036: Mon Jul 15 10:07:45 2024 00:34:29.982 read: IOPS=481, BW=1927KiB/s (1973kB/s)(18.9MiB/10029msec) 00:34:29.982 slat (nsec): min=3914, max=95345, avg=40062.25, stdev=14939.92 00:34:29.982 clat (usec): min=6499, max=43861, avg=32891.25, stdev=2983.07 00:34:29.982 lat (usec): min=6511, max=43884, avg=32931.31, stdev=2985.13 00:34:29.982 clat percentiles (usec): 00:34:29.982 | 1.00th=[15664], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:34:29.982 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:34:29.982 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:34:29.982 | 99.00th=[39584], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:34:29.982 | 99.99th=[43779] 00:34:29.982 bw ( KiB/s): min= 1792, max= 2180, per=4.20%, avg=1926.60, stdev=66.14, samples=20 00:34:29.982 iops : min= 448, max= 545, avg=481.65, stdev=16.53, samples=20 00:34:29.982 lat (msec) : 10=0.95%, 20=0.37%, 50=98.68% 00:34:29.982 cpu : usr=93.98%, sys=3.35%, ctx=296, majf=0, minf=48 00:34:29.982 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 
8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:29.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.982 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.982 filename1: (groupid=0, jobs=1): err= 0: pid=2070037: Mon Jul 15 10:07:45 2024 00:34:29.982 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10010msec) 00:34:29.982 slat (nsec): min=4111, max=51184, avg=16555.96, stdev=8564.67 00:34:29.982 clat (usec): min=24704, max=70440, avg=33418.28, stdev=1998.50 00:34:29.982 lat (usec): min=24716, max=70454, avg=33434.84, stdev=1997.90 00:34:29.982 clat percentiles (usec): 00:34:29.982 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:34:29.982 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:34:29.982 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:34:29.982 | 99.00th=[41157], 99.50th=[42206], 99.90th=[62129], 99.95th=[62129], 00:34:29.982 | 99.99th=[70779] 00:34:29.982 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1899.79, stdev=64.19, samples=19 00:34:29.982 iops : min= 448, max= 512, avg=474.95, stdev=16.05, samples=19 00:34:29.982 lat (msec) : 50=99.66%, 100=0.34% 00:34:29.982 cpu : usr=97.59%, sys=1.65%, ctx=98, majf=0, minf=47 00:34:29.982 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:29.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.982 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.982 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.982 filename1: (groupid=0, jobs=1): err= 0: pid=2070038: Mon Jul 15 10:07:45 2024 00:34:29.982 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10015msec) 00:34:29.982 slat (usec): min=3, max=129, avg=43.70, stdev=22.80 00:34:29.982 clat (usec): min=15627, max=72137, avg=33172.68, stdev=3517.65 00:34:29.982 lat (usec): min=15651, max=72152, avg=33216.38, stdev=3516.74 00:34:29.982 clat percentiles (usec): 00:34:29.982 | 1.00th=[22676], 5.00th=[31851], 10.00th=[32375], 20.00th=[32637], 00:34:29.982 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:34:29.982 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:34:29.982 | 99.00th=[44827], 99.50th=[55837], 99.90th=[71828], 99.95th=[71828], 00:34:29.982 | 99.99th=[71828] 00:34:29.982 bw ( KiB/s): min= 1715, max= 1968, per=4.15%, avg=1901.20, stdev=57.62, samples=20 00:34:29.982 iops : min= 428, max= 492, avg=475.25, stdev=14.53, samples=20 00:34:29.983 lat (msec) : 20=0.44%, 50=98.58%, 100=0.98% 00:34:29.983 cpu : usr=94.65%, sys=2.94%, ctx=147, majf=0, minf=85 00:34:29.983 IO depths : 1=5.0%, 2=10.3%, 4=21.6%, 8=54.9%, 16=8.1%, 32=0.0%, >=64=0.0% 00:34:29.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.983 complete : 0=0.0%, 4=93.4%, 8=1.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.983 issued rwts: total=4772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.983 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.983 filename2: (groupid=0, jobs=1): err= 0: pid=2070039: Mon Jul 15 10:07:45 2024 00:34:29.983 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.7MiB/10023msec) 00:34:29.983 slat (usec): min=8, max=148, avg=51.92, stdev=20.98 00:34:29.983 clat 
(usec): min=26085, max=46727, avg=33078.34, stdev=1389.71 00:34:29.983 lat (usec): min=26180, max=46742, avg=33130.26, stdev=1387.03 00:34:29.983 clat percentiles (usec): 00:34:29.983 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:34:29.983 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:34:29.983 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:34:29.983 | 99.00th=[41681], 99.50th=[43779], 99.90th=[46924], 99.95th=[46924], 00:34:29.983 | 99.99th=[46924] 00:34:29.983 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1907.20, stdev=39.40, samples=20 00:34:29.983 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:34:29.983 lat (msec) : 50=100.00% 00:34:29.983 cpu : usr=93.47%, sys=3.70%, ctx=253, majf=0, minf=51 00:34:29.983 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:29.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.983 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.983 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.983 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.983 filename2: (groupid=0, jobs=1): err= 0: pid=2070040: Mon Jul 15 10:07:45 2024 00:34:29.983 read: IOPS=481, BW=1927KiB/s (1973kB/s)(18.9MiB/10029msec) 00:34:29.983 slat (usec): min=5, max=162, avg=43.51, stdev=24.31 00:34:29.983 clat (usec): min=6389, max=43880, avg=32856.99, stdev=2959.33 00:34:29.983 lat (usec): min=6401, max=43902, avg=32900.50, stdev=2961.29 00:34:29.983 clat percentiles (usec): 00:34:29.983 | 1.00th=[16319], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:34:29.983 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:34:29.983 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:34:29.983 | 99.00th=[39060], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:34:29.983 | 99.99th=[43779] 00:34:29.983 bw ( KiB/s): min= 1792, max= 2176, per=4.20%, avg=1926.40, stdev=65.33, samples=20 00:34:29.983 iops : min= 448, max= 544, avg=481.60, stdev=16.33, samples=20 00:34:29.983 lat (msec) : 10=0.99%, 20=0.33%, 50=98.68% 00:34:29.983 cpu : usr=91.81%, sys=4.09%, ctx=475, majf=0, minf=56 00:34:29.983 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:29.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.983 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.983 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.983 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.983 filename2: (groupid=0, jobs=1): err= 0: pid=2070041: Mon Jul 15 10:07:45 2024 00:34:29.983 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.7MiB/10024msec) 00:34:29.983 slat (usec): min=7, max=111, avg=45.89, stdev=15.27 00:34:29.983 clat (usec): min=22817, max=47391, avg=33104.16, stdev=1510.00 00:34:29.983 lat (usec): min=22832, max=47414, avg=33150.05, stdev=1510.43 00:34:29.983 clat percentiles (usec): 00:34:29.983 | 1.00th=[27395], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:34:29.983 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:34:29.983 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:34:29.983 | 99.00th=[41681], 99.50th=[43779], 99.90th=[46400], 99.95th=[46400], 00:34:29.983 | 99.99th=[47449] 00:34:29.983 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1907.20, stdev=39.40, 
samples=20 00:34:29.983 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:34:29.983 lat (msec) : 50=100.00% 00:34:29.983 cpu : usr=94.11%, sys=3.16%, ctx=190, majf=0, minf=37 00:34:29.983 IO depths : 1=5.7%, 2=11.9%, 4=24.9%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:34:29.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.983 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.983 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.983 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.983 filename2: (groupid=0, jobs=1): err= 0: pid=2070042: Mon Jul 15 10:07:45 2024 00:34:29.983 read: IOPS=481, BW=1927KiB/s (1973kB/s)(18.8MiB/10010msec) 00:34:29.983 slat (nsec): min=5250, max=95766, avg=18061.06, stdev=13267.83 00:34:29.983 clat (usec): min=20282, max=96607, avg=33128.90, stdev=5038.37 00:34:29.983 lat (usec): min=20290, max=96623, avg=33146.96, stdev=5036.48 00:34:29.983 clat percentiles (usec): 00:34:29.983 | 1.00th=[22414], 5.00th=[24249], 10.00th=[27657], 20.00th=[31589], 00:34:29.983 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:34:29.983 | 70.00th=[33817], 80.00th=[33817], 90.00th=[38011], 95.00th=[41681], 00:34:29.983 | 99.00th=[44303], 99.50th=[44827], 99.90th=[83362], 99.95th=[83362], 00:34:29.983 | 99.99th=[96994] 00:34:29.983 bw ( KiB/s): min= 1680, max= 2032, per=4.20%, avg=1926.40, stdev=73.12, samples=20 00:34:29.983 iops : min= 420, max= 508, avg=481.60, stdev=18.28, samples=20 00:34:29.983 lat (msec) : 50=99.67%, 100=0.33% 00:34:29.983 cpu : usr=97.42%, sys=1.81%, ctx=48, majf=0, minf=47 00:34:29.983 IO depths : 1=0.1%, 2=0.1%, 4=2.2%, 8=81.1%, 16=16.5%, 32=0.0%, >=64=0.0% 00:34:29.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.983 complete : 0=0.0%, 4=88.9%, 8=9.4%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.983 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.983 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.983 filename2: (groupid=0, jobs=1): err= 0: pid=2070043: Mon Jul 15 10:07:45 2024 00:34:29.983 read: IOPS=475, BW=1904KiB/s (1950kB/s)(18.6MiB/10017msec) 00:34:29.983 slat (nsec): min=6053, max=62324, avg=28657.61, stdev=11636.58 00:34:29.983 clat (usec): min=14206, max=90058, avg=33385.43, stdev=2869.65 00:34:29.983 lat (usec): min=14219, max=90078, avg=33414.09, stdev=2868.31 00:34:29.983 clat percentiles (usec): 00:34:29.983 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:34:29.983 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:34:29.983 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:34:29.983 | 99.00th=[42730], 99.50th=[43779], 99.90th=[72877], 99.95th=[72877], 00:34:29.983 | 99.99th=[89654] 00:34:29.983 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1899.45, stdev=62.49, samples=20 00:34:29.983 iops : min= 416, max= 480, avg=474.85, stdev=15.62, samples=20 00:34:29.983 lat (msec) : 20=0.17%, 50=99.37%, 100=0.46% 00:34:29.983 cpu : usr=98.24%, sys=1.38%, ctx=17, majf=0, minf=43 00:34:29.983 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:29.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.983 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.983 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.983 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:34:29.983 filename2: (groupid=0, jobs=1): err= 0: pid=2070044: Mon Jul 15 10:07:45 2024 00:34:29.983 read: IOPS=480, BW=1922KiB/s (1968kB/s)(18.8MiB/10023msec) 00:34:29.983 slat (nsec): min=6966, max=91226, avg=39987.43, stdev=13728.00 00:34:29.983 clat (usec): min=6136, max=43835, avg=32983.66, stdev=2486.63 00:34:29.983 lat (usec): min=6157, max=43857, avg=33023.65, stdev=2487.08 00:34:29.983 clat percentiles (usec): 00:34:29.983 | 1.00th=[24249], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:34:29.983 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:34:29.983 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:34:29.983 | 99.00th=[39060], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:34:29.983 | 99.99th=[43779] 00:34:29.983 bw ( KiB/s): min= 1792, max= 2176, per=4.19%, avg=1920.00, stdev=71.93, samples=20 00:34:29.983 iops : min= 448, max= 544, avg=480.00, stdev=17.98, samples=20 00:34:29.983 lat (msec) : 10=0.60%, 20=0.35%, 50=99.04% 00:34:29.983 cpu : usr=98.35%, sys=1.26%, ctx=15, majf=0, minf=39 00:34:29.983 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:29.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.983 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.983 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.983 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.983 filename2: (groupid=0, jobs=1): err= 0: pid=2070045: Mon Jul 15 10:07:45 2024 00:34:29.983 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.6MiB/10001msec) 00:34:29.983 slat (nsec): min=8057, max=60687, avg=16088.84, stdev=9838.74 00:34:29.983 clat (usec): min=13700, max=52808, avg=33424.39, stdev=1451.38 00:34:29.983 lat (usec): min=13711, max=52823, avg=33440.48, stdev=1451.26 00:34:29.983 clat percentiles (usec): 00:34:29.983 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:34:29.983 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:34:29.983 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:34:29.983 | 99.00th=[43254], 99.50th=[43779], 99.90th=[49021], 99.95th=[49546], 00:34:29.983 | 99.99th=[52691] 00:34:29.983 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1906.53, stdev=72.59, samples=19 00:34:29.983 iops : min= 448, max= 512, avg=476.63, stdev=18.15, samples=19 00:34:29.983 lat (msec) : 20=0.04%, 50=99.92%, 100=0.04% 00:34:29.983 cpu : usr=98.04%, sys=1.55%, ctx=20, majf=0, minf=68 00:34:29.983 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:29.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.983 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.983 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.983 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.983 filename2: (groupid=0, jobs=1): err= 0: pid=2070046: Mon Jul 15 10:07:45 2024 00:34:29.983 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.7MiB/10011msec) 00:34:29.983 slat (usec): min=5, max=118, avg=30.42, stdev=13.44 00:34:29.983 clat (usec): min=14184, max=92718, avg=33161.95, stdev=3913.94 00:34:29.983 lat (usec): min=14266, max=92739, avg=33192.37, stdev=3913.13 00:34:29.983 clat percentiles (usec): 00:34:29.983 | 1.00th=[22414], 5.00th=[31065], 10.00th=[32375], 20.00th=[32637], 00:34:29.983 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 
60.00th=[33162], 00:34:29.983 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:34:29.983 | 99.00th=[43254], 99.50th=[47973], 99.90th=[83362], 99.95th=[92799], 00:34:29.983 | 99.99th=[92799] 00:34:29.983 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1910.40, stdev=76.89, samples=20 00:34:29.983 iops : min= 416, max= 512, avg=477.60, stdev=19.22, samples=20 00:34:29.983 lat (msec) : 20=0.13%, 50=99.54%, 100=0.33% 00:34:29.983 cpu : usr=98.13%, sys=1.41%, ctx=34, majf=0, minf=56 00:34:29.983 IO depths : 1=5.1%, 2=10.3%, 4=21.3%, 8=55.3%, 16=8.0%, 32=0.0%, >=64=0.0% 00:34:29.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.983 complete : 0=0.0%, 4=93.2%, 8=1.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.983 issued rwts: total=4792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.983 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.983 00:34:29.983 Run status group 0 (all jobs): 00:34:29.983 READ: bw=44.7MiB/s (46.9MB/s), 1899KiB/s-1927KiB/s (1945kB/s-1973kB/s), io=449MiB (470MB), run=10001-10029msec 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.983 
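Note: the 24-thread aggregate reported above is internally consistent. Each job sustains between 1899 and 1927 KiB/s, and 24 x ~1907 KiB/s is roughly 45,770 KiB/s, i.e. the reported 44.7 MiB/s; likewise 24 jobs x ~18.7 MiB read apiece gives the 449 MiB of total io. The ~33 ms average clat also squares with Little's law: at iodepth=16 and ~477 IOPS per job (1907 KiB/s at bs=4k), the expected completion latency is 16 / 477 ≈ 33.5 ms, close to the ~33.1 ms the jobs report.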
10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:29.983 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.984 bdev_null0 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.984 [2024-07-15 10:07:45.427657] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.984 bdev_null1 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:29.984 { 00:34:29.984 
"params": { 00:34:29.984 "name": "Nvme$subsystem", 00:34:29.984 "trtype": "$TEST_TRANSPORT", 00:34:29.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.984 "adrfam": "ipv4", 00:34:29.984 "trsvcid": "$NVMF_PORT", 00:34:29.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.984 "hdgst": ${hdgst:-false}, 00:34:29.984 "ddgst": ${ddgst:-false} 00:34:29.984 }, 00:34:29.984 "method": "bdev_nvme_attach_controller" 00:34:29.984 } 00:34:29.984 EOF 00:34:29.984 )") 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:29.984 { 00:34:29.984 "params": { 00:34:29.984 "name": "Nvme$subsystem", 00:34:29.984 "trtype": "$TEST_TRANSPORT", 00:34:29.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.984 "adrfam": "ipv4", 00:34:29.984 "trsvcid": "$NVMF_PORT", 00:34:29.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.984 "hdgst": ${hdgst:-false}, 00:34:29.984 "ddgst": ${ddgst:-false} 00:34:29.984 }, 00:34:29.984 "method": "bdev_nvme_attach_controller" 00:34:29.984 } 00:34:29.984 EOF 00:34:29.984 )") 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:29.984 10:07:45 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:29.984 "params": { 00:34:29.984 "name": "Nvme0", 00:34:29.984 "trtype": "tcp", 00:34:29.984 "traddr": "10.0.0.2", 00:34:29.984 "adrfam": "ipv4", 00:34:29.984 "trsvcid": "4420", 00:34:29.984 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:29.984 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:29.984 "hdgst": false, 00:34:29.984 "ddgst": false 00:34:29.984 }, 00:34:29.984 "method": "bdev_nvme_attach_controller" 00:34:29.984 },{ 00:34:29.984 "params": { 00:34:29.984 "name": "Nvme1", 00:34:29.984 "trtype": "tcp", 00:34:29.984 "traddr": "10.0.0.2", 00:34:29.984 "adrfam": "ipv4", 00:34:29.984 "trsvcid": "4420", 00:34:29.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:29.984 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:29.984 "hdgst": false, 00:34:29.984 "ddgst": false 00:34:29.984 }, 00:34:29.984 "method": "bdev_nvme_attach_controller" 00:34:29.984 }' 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:29.984 10:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.984 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:29.984 ... 00:34:29.984 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:29.984 ... 
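The generated JSON printed above is what the fio bdev plugin consumes: one bdev_nvme_attach_controller entry per subsystem, handed to fio on /dev/fd/62 while the fio job file arrives on /dev/fd/61. Outside the test harness the same run can be reproduced from ordinary files; a sketch, assuming the plugin path from this workspace and hypothetical bdev.json/job.fio files holding the config and job sections shown here:

  # Load the SPDK bdev engine into stock fio and point it at the saved config
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json job.fio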
00:34:29.984 fio-3.35 00:34:29.984 Starting 4 threads 00:34:29.984 EAL: No free 2048 kB hugepages reported on node 1 00:34:35.272 00:34:35.272 filename0: (groupid=0, jobs=1): err= 0: pid=2071312: Mon Jul 15 10:07:51 2024 00:34:35.272 read: IOPS=1831, BW=14.3MiB/s (15.0MB/s)(71.6MiB/5003msec) 00:34:35.272 slat (nsec): min=6523, max=51194, avg=11226.06, stdev=4311.49 00:34:35.272 clat (usec): min=1004, max=7995, avg=4331.97, stdev=758.33 00:34:35.272 lat (usec): min=1017, max=8009, avg=4343.20, stdev=758.32 00:34:35.272 clat percentiles (usec): 00:34:35.272 | 1.00th=[ 2868], 5.00th=[ 3326], 10.00th=[ 3621], 20.00th=[ 3818], 00:34:35.272 | 30.00th=[ 3949], 40.00th=[ 4080], 50.00th=[ 4178], 60.00th=[ 4293], 00:34:35.272 | 70.00th=[ 4424], 80.00th=[ 4686], 90.00th=[ 5473], 95.00th=[ 6063], 00:34:35.272 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 7308], 99.95th=[ 7439], 00:34:35.272 | 99.99th=[ 7963] 00:34:35.272 bw ( KiB/s): min=13184, max=16208, per=24.81%, avg=14650.80, stdev=778.90, samples=10 00:34:35.272 iops : min= 1648, max= 2026, avg=1831.30, stdev=97.39, samples=10 00:34:35.272 lat (msec) : 2=0.09%, 4=34.28%, 10=65.63% 00:34:35.272 cpu : usr=93.22%, sys=6.32%, ctx=7, majf=0, minf=0 00:34:35.272 IO depths : 1=0.1%, 2=5.4%, 4=66.7%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.272 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.272 issued rwts: total=9163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.272 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:35.272 filename0: (groupid=0, jobs=1): err= 0: pid=2071313: Mon Jul 15 10:07:51 2024 00:34:35.272 read: IOPS=1849, BW=14.5MiB/s (15.2MB/s)(72.3MiB/5002msec) 00:34:35.272 slat (nsec): min=6576, max=61309, avg=11542.80, stdev=4677.26 00:34:35.272 clat (usec): min=809, max=7963, avg=4290.02, stdev=728.43 00:34:35.272 lat (usec): min=821, max=7978, avg=4301.56, stdev=728.21 00:34:35.272 clat percentiles (usec): 00:34:35.272 | 1.00th=[ 2802], 5.00th=[ 3294], 10.00th=[ 3589], 20.00th=[ 3818], 00:34:35.272 | 30.00th=[ 3949], 40.00th=[ 4080], 50.00th=[ 4178], 60.00th=[ 4293], 00:34:35.272 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5211], 95.00th=[ 5997], 00:34:35.272 | 99.00th=[ 6652], 99.50th=[ 6783], 99.90th=[ 7635], 99.95th=[ 7701], 00:34:35.272 | 99.99th=[ 7963] 00:34:35.272 bw ( KiB/s): min=14108, max=15504, per=25.05%, avg=14793.20, stdev=462.86, samples=10 00:34:35.272 iops : min= 1763, max= 1938, avg=1849.10, stdev=57.94, samples=10 00:34:35.272 lat (usec) : 1000=0.01% 00:34:35.272 lat (msec) : 2=0.15%, 4=33.68%, 10=66.16% 00:34:35.272 cpu : usr=92.30%, sys=7.22%, ctx=11, majf=0, minf=9 00:34:35.272 IO depths : 1=0.1%, 2=4.3%, 4=66.2%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.272 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.272 issued rwts: total=9252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.272 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:35.272 filename1: (groupid=0, jobs=1): err= 0: pid=2071314: Mon Jul 15 10:07:51 2024 00:34:35.272 read: IOPS=1878, BW=14.7MiB/s (15.4MB/s)(73.4MiB/5004msec) 00:34:35.272 slat (nsec): min=6328, max=60012, avg=11357.21, stdev=4364.16 00:34:35.272 clat (usec): min=816, max=7756, avg=4224.10, stdev=749.69 00:34:35.272 lat (usec): min=829, max=7764, avg=4235.45, stdev=749.64 00:34:35.272 clat percentiles (usec): 00:34:35.272 
| 1.00th=[ 2737], 5.00th=[ 3261], 10.00th=[ 3490], 20.00th=[ 3720], 00:34:35.272 | 30.00th=[ 3884], 40.00th=[ 4015], 50.00th=[ 4113], 60.00th=[ 4228], 00:34:35.272 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 5145], 95.00th=[ 5997], 00:34:35.272 | 99.00th=[ 6783], 99.50th=[ 6980], 99.90th=[ 7373], 99.95th=[ 7504], 00:34:35.272 | 99.99th=[ 7767] 00:34:35.272 bw ( KiB/s): min=13920, max=17088, per=25.45%, avg=15028.80, stdev=904.31, samples=10 00:34:35.272 iops : min= 1740, max= 2136, avg=1878.60, stdev=113.04, samples=10 00:34:35.272 lat (usec) : 1000=0.02% 00:34:35.272 lat (msec) : 2=0.02%, 4=39.49%, 10=60.47% 00:34:35.272 cpu : usr=92.64%, sys=6.86%, ctx=13, majf=0, minf=0 00:34:35.272 IO depths : 1=0.1%, 2=4.2%, 4=66.7%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.272 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.272 issued rwts: total=9398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.272 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:35.272 filename1: (groupid=0, jobs=1): err= 0: pid=2071315: Mon Jul 15 10:07:51 2024 00:34:35.272 read: IOPS=1823, BW=14.2MiB/s (14.9MB/s)(71.3MiB/5003msec) 00:34:35.272 slat (nsec): min=6570, max=45826, avg=11529.13, stdev=4606.50 00:34:35.272 clat (usec): min=955, max=8099, avg=4350.46, stdev=755.33 00:34:35.272 lat (usec): min=968, max=8112, avg=4361.99, stdev=755.06 00:34:35.272 clat percentiles (usec): 00:34:35.272 | 1.00th=[ 2835], 5.00th=[ 3392], 10.00th=[ 3621], 20.00th=[ 3851], 00:34:35.272 | 30.00th=[ 3982], 40.00th=[ 4080], 50.00th=[ 4178], 60.00th=[ 4359], 00:34:35.272 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 5473], 95.00th=[ 6063], 00:34:35.272 | 99.00th=[ 6587], 99.50th=[ 6783], 99.90th=[ 7635], 99.95th=[ 7963], 00:34:35.272 | 99.99th=[ 8094] 00:34:35.272 bw ( KiB/s): min=13792, max=15280, per=24.70%, avg=14587.20, stdev=564.88, samples=10 00:34:35.272 iops : min= 1724, max= 1910, avg=1823.40, stdev=70.61, samples=10 00:34:35.272 lat (usec) : 1000=0.02% 00:34:35.272 lat (msec) : 2=0.08%, 4=30.49%, 10=69.41% 00:34:35.272 cpu : usr=92.60%, sys=6.90%, ctx=8, majf=0, minf=9 00:34:35.272 IO depths : 1=0.1%, 2=4.1%, 4=67.2%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.272 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.272 issued rwts: total=9125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.272 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:35.272 00:34:35.272 Run status group 0 (all jobs): 00:34:35.272 READ: bw=57.7MiB/s (60.5MB/s), 14.2MiB/s-14.7MiB/s (14.9MB/s-15.4MB/s), io=289MiB (303MB), run=5002-5004msec 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.272 00:34:35.272 real 0m24.285s 00:34:35.272 user 4m28.847s 00:34:35.272 sys 0m8.945s 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:35.272 10:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:35.272 ************************************ 00:34:35.272 END TEST fio_dif_rand_params 00:34:35.272 ************************************ 00:34:35.272 10:07:51 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:34:35.272 10:07:51 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:35.272 10:07:51 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:35.272 10:07:51 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:35.272 10:07:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:35.272 ************************************ 00:34:35.272 START TEST fio_dif_digest 00:34:35.272 ************************************ 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@128 
-- # hdgst=true 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:35.272 bdev_null0 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:35.272 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:35.273 [2024-07-15 10:07:51.962543] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:35.273 { 00:34:35.273 "params": { 00:34:35.273 "name": "Nvme$subsystem", 00:34:35.273 "trtype": "$TEST_TRANSPORT", 00:34:35.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:35.273 "adrfam": "ipv4", 00:34:35.273 "trsvcid": "$NVMF_PORT", 00:34:35.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:35.273 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:35.273 "hdgst": ${hdgst:-false}, 00:34:35.273 "ddgst": ${ddgst:-false} 00:34:35.273 }, 00:34:35.273 "method": "bdev_nvme_attach_controller" 00:34:35.273 } 00:34:35.273 EOF 00:34:35.273 )") 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:35.273 "params": { 00:34:35.273 "name": "Nvme0", 00:34:35.273 "trtype": "tcp", 00:34:35.273 "traddr": "10.0.0.2", 00:34:35.273 "adrfam": "ipv4", 00:34:35.273 "trsvcid": "4420", 00:34:35.273 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:35.273 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:35.273 "hdgst": true, 00:34:35.273 "ddgst": true 00:34:35.273 }, 00:34:35.273 "method": "bdev_nvme_attach_controller" 00:34:35.273 }' 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:35.273 10:07:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:35.273 10:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:35.273 10:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:35.273 10:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:35.273 10:07:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:35.529 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:35.529 ... 
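Relative to the earlier passes, the only change in the attach parameters printed above is "hdgst": true and "ddgst": true, which makes every NVMe/TCP PDU on this controller carry header and data digests (CRC32C). A standalone sketch of the same attach via rpc.py; the --hdgst/--ddgst flag names are assumed from the SPDK rpc script rather than taken from this log:

  # Attach the TCP controller with header and data digests enabled
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 --hdgst --ddgst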
00:34:35.529 fio-3.35 00:34:35.529 Starting 3 threads 00:34:35.529 EAL: No free 2048 kB hugepages reported on node 1 00:34:47.731 00:34:47.731 filename0: (groupid=0, jobs=1): err= 0: pid=2072180: Mon Jul 15 10:08:02 2024 00:34:47.731 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(258MiB/10048msec) 00:34:47.731 slat (nsec): min=5157, max=49796, avg=18311.23, stdev=4549.39 00:34:47.731 clat (usec): min=9357, max=58477, avg=14565.55, stdev=1681.83 00:34:47.731 lat (usec): min=9387, max=58500, avg=14583.86, stdev=1681.81 00:34:47.731 clat percentiles (usec): 00:34:47.731 | 1.00th=[11600], 5.00th=[12911], 10.00th=[13304], 20.00th=[13698], 00:34:47.731 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:34:47.731 | 70.00th=[15008], 80.00th=[15401], 90.00th=[15926], 95.00th=[16319], 00:34:47.731 | 99.00th=[17171], 99.50th=[17957], 99.90th=[22152], 99.95th=[51643], 00:34:47.731 | 99.99th=[58459] 00:34:47.731 bw ( KiB/s): min=25344, max=27392, per=33.02%, avg=26380.80, stdev=501.62, samples=20 00:34:47.731 iops : min= 198, max= 214, avg=206.10, stdev= 3.92, samples=20 00:34:47.731 lat (msec) : 10=0.24%, 20=99.52%, 50=0.15%, 100=0.10% 00:34:47.731 cpu : usr=89.96%, sys=8.53%, ctx=570, majf=0, minf=82 00:34:47.731 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:47.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.731 issued rwts: total=2063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.731 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:47.731 filename0: (groupid=0, jobs=1): err= 0: pid=2072181: Mon Jul 15 10:08:02 2024 00:34:47.731 read: IOPS=216, BW=27.0MiB/s (28.3MB/s)(271MiB/10008msec) 00:34:47.731 slat (usec): min=4, max=440, avg=14.31, stdev= 9.49 00:34:47.731 clat (usec): min=8243, max=23786, avg=13854.75, stdev=1212.20 00:34:47.731 lat (usec): min=8256, max=23809, avg=13869.06, stdev=1212.12 00:34:47.731 clat percentiles (usec): 00:34:47.731 | 1.00th=[ 9896], 5.00th=[11994], 10.00th=[12387], 20.00th=[13042], 00:34:47.731 | 30.00th=[13304], 40.00th=[13698], 50.00th=[13960], 60.00th=[14091], 00:34:47.731 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15270], 95.00th=[15664], 00:34:47.731 | 99.00th=[16450], 99.50th=[16909], 99.90th=[21627], 99.95th=[21627], 00:34:47.731 | 99.99th=[23725] 00:34:47.731 bw ( KiB/s): min=26624, max=29184, per=34.62%, avg=27660.80, stdev=716.78, samples=20 00:34:47.731 iops : min= 208, max= 228, avg=216.10, stdev= 5.60, samples=20 00:34:47.731 lat (msec) : 10=1.11%, 20=98.75%, 50=0.14% 00:34:47.731 cpu : usr=91.12%, sys=7.57%, ctx=175, majf=0, minf=136 00:34:47.732 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:47.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.732 issued rwts: total=2164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.732 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:47.732 filename0: (groupid=0, jobs=1): err= 0: pid=2072182: Mon Jul 15 10:08:02 2024 00:34:47.732 read: IOPS=203, BW=25.4MiB/s (26.7MB/s)(256MiB/10047msec) 00:34:47.732 slat (nsec): min=4995, max=29319, avg=13769.05, stdev=1611.31 00:34:47.732 clat (usec): min=10290, max=57691, avg=14708.95, stdev=2730.04 00:34:47.732 lat (usec): min=10303, max=57704, avg=14722.72, stdev=2730.02 00:34:47.732 clat percentiles (usec): 00:34:47.732 | 
1.00th=[12125], 5.00th=[12911], 10.00th=[13173], 20.00th=[13698], 00:34:47.732 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:34:47.732 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15926], 95.00th=[16319], 00:34:47.732 | 99.00th=[17433], 99.50th=[22676], 99.90th=[56886], 99.95th=[57410], 00:34:47.732 | 99.99th=[57934] 00:34:47.732 bw ( KiB/s): min=24576, max=27136, per=32.71%, avg=26127.30, stdev=704.26, samples=20 00:34:47.732 iops : min= 192, max= 212, avg=204.10, stdev= 5.52, samples=20 00:34:47.732 lat (msec) : 20=99.46%, 50=0.24%, 100=0.29% 00:34:47.732 cpu : usr=92.43%, sys=7.04%, ctx=69, majf=0, minf=85 00:34:47.732 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:47.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.732 issued rwts: total=2044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.732 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:47.732 00:34:47.732 Run status group 0 (all jobs): 00:34:47.732 READ: bw=78.0MiB/s (81.8MB/s), 25.4MiB/s-27.0MiB/s (26.7MB/s-28.3MB/s), io=784MiB (822MB), run=10008-10048msec 00:34:47.732 10:08:03 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:47.732 10:08:03 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:47.732 10:08:03 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:47.732 10:08:03 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:47.732 10:08:03 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:47.732 10:08:03 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:47.732 10:08:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.732 10:08:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:47.732 10:08:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.732 10:08:03 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:47.732 10:08:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.732 10:08:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:47.732 10:08:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.732 00:34:47.732 real 0m11.276s 00:34:47.732 user 0m28.734s 00:34:47.732 sys 0m2.618s 00:34:47.732 10:08:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:47.732 10:08:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:47.732 ************************************ 00:34:47.732 END TEST fio_dif_digest 00:34:47.732 ************************************ 00:34:47.732 10:08:03 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:34:47.732 10:08:03 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:47.732 10:08:03 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:47.732 10:08:03 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:47.732 10:08:03 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:34:47.732 10:08:03 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:47.732 10:08:03 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:34:47.732 10:08:03 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:47.732 10:08:03 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:34:47.732 rmmod nvme_tcp 00:34:47.732 rmmod nvme_fabrics 00:34:47.732 rmmod nvme_keyring 00:34:47.732 10:08:03 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:47.732 10:08:03 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:34:47.732 10:08:03 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:34:47.732 10:08:03 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2066135 ']' 00:34:47.732 10:08:03 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2066135 00:34:47.732 10:08:03 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2066135 ']' 00:34:47.732 10:08:03 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2066135 00:34:47.732 10:08:03 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:34:47.732 10:08:03 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:47.732 10:08:03 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2066135 00:34:47.732 10:08:03 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:47.732 10:08:03 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:47.732 10:08:03 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2066135' 00:34:47.732 killing process with pid 2066135 00:34:47.732 10:08:03 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2066135 00:34:47.732 10:08:03 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2066135 00:34:47.732 10:08:03 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:34:47.732 10:08:03 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:47.732 Waiting for block devices as requested 00:34:47.991 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:47.991 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:48.250 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:48.250 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:48.250 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:48.250 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:48.510 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:48.510 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:48.510 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:48.510 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:48.770 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:48.770 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:48.770 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:48.770 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:49.030 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:49.030 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:49.030 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:49.289 10:08:05 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:49.289 10:08:05 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:49.289 10:08:05 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:49.289 10:08:05 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:49.289 10:08:05 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.289 10:08:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:49.289 10:08:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.241 10:08:07 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:51.241 00:34:51.241 real 1m6.682s 00:34:51.241 user 6m24.774s 00:34:51.241 sys 0m20.948s 00:34:51.241 10:08:07 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:34:51.241 10:08:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:51.241 ************************************ 00:34:51.241 END TEST nvmf_dif 00:34:51.241 ************************************ 00:34:51.241 10:08:07 -- common/autotest_common.sh@1142 -- # return 0 00:34:51.241 10:08:07 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:51.241 10:08:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:51.241 10:08:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:51.241 10:08:07 -- common/autotest_common.sh@10 -- # set +x 00:34:51.241 ************************************ 00:34:51.241 START TEST nvmf_abort_qd_sizes 00:34:51.241 ************************************ 00:34:51.241 10:08:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:51.242 * Looking for test storage... 00:34:51.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:51.242 10:08:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:51.242 10:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:51.242 10:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:51.242 10:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:51.242 10:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:51.242 10:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:51.242 10:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:51.242 10:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:51.242 10:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:51.242 10:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:51.242 10:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:51.242 10:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.242 10:08:08 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:34:51.242 10:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:53.147 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:53.147 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:53.147 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:53.147 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
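The discovery loop above walks the cached PCI IDs for supported NICs (here two Intel E810 0x159b functions) and collects the kernel net device bound to each one via its sysfs net/ directory, yielding cvl_0_0 and cvl_0_1. The same lookup for a single function is just a sysfs listing; the device address below is taken from this log:

  # Map a PCI function to its kernel net device, as nvmf/common.sh does
  ls /sys/bus/pci/devices/0000:0a:00.0/net
  # -> cvl_0_0 (subsequently moved into the cvl_0_0_ns_spdk namespace as the target side)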
00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:53.147 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:53.406 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:53.406 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:53.406 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:53.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:53.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:34:53.406 00:34:53.406 --- 10.0.0.2 ping statistics --- 00:34:53.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.406 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:34:53.406 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:53.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:53.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:34:53.406 00:34:53.406 --- 10.0.0.1 ping statistics --- 00:34:53.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.407 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:34:53.407 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:53.407 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:34:53.407 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:53.407 10:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:54.345 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:54.345 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:54.345 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:54.345 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:54.345 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:54.345 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:54.345 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:54.345 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:54.345 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:54.345 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:54.604 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:54.604 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:54.604 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:54.604 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:54.604 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:54.604 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:55.542 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:55.542 10:08:12 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:55.542 10:08:12 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:55.542 10:08:12 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:55.542 10:08:12 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:55.542 10:08:12 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:55.542 10:08:12 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:55.542 10:08:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:55.542 10:08:12 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:55.542 10:08:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:55.542 10:08:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:55.542 10:08:12 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2076963 00:34:55.542 10:08:12 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:55.542 10:08:12 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2076963 00:34:55.542 10:08:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2076963 ']' 00:34:55.542 10:08:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:55.542 10:08:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:55.542 10:08:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:55.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:55.542 10:08:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:55.542 10:08:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:55.542 [2024-07-15 10:08:12.273337] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:34:55.542 [2024-07-15 10:08:12.273417] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:55.542 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.542 [2024-07-15 10:08:12.313920] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:55.799 [2024-07-15 10:08:12.345829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:55.799 [2024-07-15 10:08:12.439681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:55.799 [2024-07-15 10:08:12.439738] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:55.799 [2024-07-15 10:08:12.439755] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:55.799 [2024-07-15 10:08:12.439768] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:55.799 [2024-07-15 10:08:12.439785] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:55.799 [2024-07-15 10:08:12.439843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:55.799 [2024-07-15 10:08:12.439905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:55.799 [2024-07-15 10:08:12.439951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:55.800 [2024-07-15 10:08:12.443905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.800 10:08:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:55.800 10:08:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:34:55.800 10:08:12 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:55.800 10:08:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:55.800 10:08:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- 
scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:56.058 10:08:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:56.058 ************************************ 00:34:56.058 START TEST spdk_target_abort 00:34:56.058 ************************************ 00:34:56.058 10:08:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:34:56.058 10:08:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:56.058 10:08:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:34:56.058 10:08:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.058 10:08:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:59.344 spdk_targetn1 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:59.344 [2024-07-15 10:08:15.457094] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 
-- # set +x 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:59.344 [2024-07-15 10:08:15.489356] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:59.344 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:59.345 10:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:59.345 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.636 Initializing NVMe Controllers 00:35:02.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:02.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:02.636 Initialization complete. Launching workers. 00:35:02.636 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10094, failed: 0 00:35:02.636 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1240, failed to submit 8854 00:35:02.636 success 742, unsuccess 498, failed 0 00:35:02.636 10:08:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:02.636 10:08:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:02.636 EAL: No free 2048 kB hugepages reported on node 1 00:35:05.921 Initializing NVMe Controllers 00:35:05.921 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:05.921 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:05.921 Initialization complete. Launching workers. 00:35:05.921 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8526, failed: 0 00:35:05.921 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1232, failed to submit 7294 00:35:05.921 success 325, unsuccess 907, failed 0 00:35:05.921 10:08:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:05.921 10:08:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:05.921 EAL: No free 2048 kB hugepages reported on node 1 00:35:09.209 Initializing NVMe Controllers 00:35:09.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:09.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:09.209 Initialization complete. Launching workers. 
00:35:09.210 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31318, failed: 0 00:35:09.210 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2776, failed to submit 28542 00:35:09.210 success 530, unsuccess 2246, failed 0 00:35:09.210 10:08:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:09.210 10:08:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.210 10:08:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:09.210 10:08:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.210 10:08:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:09.210 10:08:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.210 10:08:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:10.153 10:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.153 10:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2076963 00:35:10.153 10:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2076963 ']' 00:35:10.153 10:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2076963 00:35:10.153 10:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:35:10.153 10:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:10.153 10:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2076963 00:35:10.153 10:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:10.153 10:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:10.153 10:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2076963' 00:35:10.153 killing process with pid 2076963 00:35:10.153 10:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2076963 00:35:10.153 10:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2076963 00:35:10.153 00:35:10.153 real 0m14.299s 00:35:10.153 user 0m54.276s 00:35:10.153 sys 0m2.562s 00:35:10.153 10:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:10.153 10:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:10.153 ************************************ 00:35:10.153 END TEST spdk_target_abort 00:35:10.153 ************************************ 00:35:10.441 10:08:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:35:10.441 10:08:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:10.441 10:08:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:10.441 10:08:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:10.441 10:08:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:10.441 
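That closes spdk_target_abort: the local NVMe drive (0000:88:00.0) was claimed as an SPDK bdev, exported over the namespaced TCP listener, and exercised by the abort example at queue depths 4, 24 and 64; the success/unsuccess/failed counts above are the abort dispositions at each depth. A condensed sketch of the JSON-RPC sequence the test issued (rpc.py stands in for the rpc_cmd wrapper seen in the trace):

    rpc.py bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
    for qd in 4 24 64; do
        build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
    rpc.py bdev_nvme_detach_controller spdk_target

The kernel_target_abort test that starts next repeats the same abort sweep, but against an in-kernel nvmet target configured through configfs rather than an SPDK target.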
************************************ 00:35:10.441 START TEST kernel_target_abort 00:35:10.441 ************************************ 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:10.441 10:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:11.375 Waiting for block devices as requested 00:35:11.375 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:11.633 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:11.633 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:11.633 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:11.891 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:11.891 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:11.891 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:11.891 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:11.891 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:12.200 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:12.200 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:12.200 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:12.200 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:12.458 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:12.458 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:12.458 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:12.458 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:12.716 No valid GPT data, bailing 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:12.716 10:08:29 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:12.716 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:12.973 00:35:12.973 Discovery Log Number of Records 2, Generation counter 2 00:35:12.973 =====Discovery Log Entry 0====== 00:35:12.973 trtype: tcp 00:35:12.973 adrfam: ipv4 00:35:12.973 subtype: current discovery subsystem 00:35:12.973 treq: not specified, sq flow control disable supported 00:35:12.973 portid: 1 00:35:12.973 trsvcid: 4420 00:35:12.973 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:12.973 traddr: 10.0.0.1 00:35:12.973 eflags: none 00:35:12.973 sectype: none 00:35:12.973 =====Discovery Log Entry 1====== 00:35:12.973 trtype: tcp 00:35:12.973 adrfam: ipv4 00:35:12.973 subtype: nvme subsystem 00:35:12.974 treq: not specified, sq flow control disable supported 00:35:12.974 portid: 1 00:35:12.974 trsvcid: 4420 00:35:12.974 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:12.974 traddr: 10.0.0.1 00:35:12.974 eflags: none 00:35:12.974 sectype: none 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.974 10:08:29 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:12.974 10:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:12.974 EAL: No free 2048 kB hugepages reported on node 1 00:35:16.258 Initializing NVMe Controllers 00:35:16.258 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:16.258 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:16.258 Initialization complete. Launching workers. 00:35:16.258 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35974, failed: 0 00:35:16.258 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35974, failed to submit 0 00:35:16.258 success 0, unsuccess 35974, failed 0 00:35:16.258 10:08:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:16.258 10:08:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:16.258 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.550 Initializing NVMe Controllers 00:35:19.550 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:19.550 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:19.550 Initialization complete. Launching workers. 
00:35:19.550 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65224, failed: 0 00:35:19.550 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16450, failed to submit 48774 00:35:19.550 success 0, unsuccess 16450, failed 0 00:35:19.550 10:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:19.550 10:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:19.550 EAL: No free 2048 kB hugepages reported on node 1 00:35:22.081 Initializing NVMe Controllers 00:35:22.081 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:22.081 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:22.081 Initialization complete. Launching workers. 00:35:22.081 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63622, failed: 0 00:35:22.081 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15894, failed to submit 47728 00:35:22.081 success 0, unsuccess 15894, failed 0 00:35:22.081 10:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:22.081 10:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:22.081 10:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:35:22.341 10:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:22.341 10:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:22.341 10:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:22.341 10:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:22.341 10:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:22.341 10:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:22.341 10:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:23.718 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:23.718 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:23.718 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:23.718 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:23.718 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:23.718 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:23.718 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:23.718 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:23.718 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:23.718 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:23.718 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:23.718 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:23.718 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:23.718 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:35:23.718 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:23.718 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:24.656 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:24.656 00:35:24.656 real 0m14.337s 00:35:24.656 user 0m5.326s 00:35:24.656 sys 0m3.352s 00:35:24.656 10:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:24.656 10:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:24.656 ************************************ 00:35:24.656 END TEST kernel_target_abort 00:35:24.656 ************************************ 00:35:24.656 10:08:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:35:24.656 10:08:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:24.656 10:08:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:24.656 10:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:24.656 10:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:35:24.656 10:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:24.656 10:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:35:24.656 10:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:24.656 10:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:24.656 rmmod nvme_tcp 00:35:24.656 rmmod nvme_fabrics 00:35:24.656 rmmod nvme_keyring 00:35:24.656 10:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:24.656 10:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:35:24.656 10:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:35:24.656 10:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2076963 ']' 00:35:24.656 10:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2076963 00:35:24.656 10:08:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2076963 ']' 00:35:24.656 10:08:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2076963 00:35:24.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2076963) - No such process 00:35:24.656 10:08:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2076963 is not found' 00:35:24.656 Process with pid 2076963 is not found 00:35:24.656 10:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:24.656 10:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:26.033 Waiting for block devices as requested 00:35:26.033 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:26.033 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:26.033 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:26.033 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:26.293 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:26.293 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:26.293 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:26.293 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:26.293 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:26.553 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:26.553 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:26.553 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:26.813 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:26.813 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:35:26.813 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:26.813 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:27.072 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:27.072 10:08:43 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:27.072 10:08:43 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:27.072 10:08:43 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:27.072 10:08:43 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:27.072 10:08:43 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:27.072 10:08:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:27.072 10:08:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:29.006 10:08:45 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:29.006 00:35:29.006 real 0m37.827s 00:35:29.006 user 1m1.611s 00:35:29.006 sys 0m9.115s 00:35:29.006 10:08:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:29.006 10:08:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:29.006 ************************************ 00:35:29.006 END TEST nvmf_abort_qd_sizes 00:35:29.006 ************************************ 00:35:29.264 10:08:45 -- common/autotest_common.sh@1142 -- # return 0 00:35:29.264 10:08:45 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:29.264 10:08:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:29.264 10:08:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:29.264 10:08:45 -- common/autotest_common.sh@10 -- # set +x 00:35:29.264 ************************************ 00:35:29.264 START TEST keyring_file 00:35:29.264 ************************************ 00:35:29.264 10:08:45 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:29.264 * Looking for test storage... 
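The keyring_file suite starting here exercises SPDK's file-based keyring for NVMe/TCP TLS: two 16-byte PSKs (key0 and key1) are written to temp files in interchange format, registered with a bdevperf instance over /var/tmp/bperf.sock, and then attached, swapped and deliberately misused against a TLS listener on 127.0.0.1:4420 while the test watches key reference counts. The refcount checks that recur below follow this one-liner pattern:

    rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
        | jq '.[] | select(.name == "key0")' | jq -r .refcnt

In this run the count reads 1 while only the keyring holds key0 and 2 while a controller attached with --psk key0 is up.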
00:35:29.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:29.264 10:08:45 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:29.264 10:08:45 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:29.264 10:08:45 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:29.264 10:08:45 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:29.264 10:08:45 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:29.264 10:08:45 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:29.264 10:08:45 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:29.264 10:08:45 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:29.264 10:08:45 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:29.264 10:08:45 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:29.264 10:08:45 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:29.264 10:08:45 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:29.264 10:08:45 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:29.264 10:08:45 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:29.264 10:08:45 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:29.265 10:08:45 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:29.265 10:08:45 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:29.265 10:08:45 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:29.265 10:08:45 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.265 10:08:45 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.265 10:08:45 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.265 10:08:45 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:29.265 10:08:45 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@47 -- # : 0 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:29.265 10:08:45 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:29.265 10:08:45 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:29.265 10:08:45 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:29.265 10:08:45 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:29.265 10:08:45 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:29.265 10:08:45 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:29.265 10:08:45 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:29.265 10:08:45 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:29.265 10:08:45 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:29.265 10:08:45 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:29.265 10:08:45 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:29.265 10:08:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:29.265 10:08:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nNqiHXMlhz 00:35:29.265 10:08:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:29.265 10:08:45 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nNqiHXMlhz 00:35:29.265 10:08:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nNqiHXMlhz 00:35:29.265 10:08:45 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.nNqiHXMlhz 00:35:29.265 10:08:45 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:29.265 10:08:45 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:29.265 10:08:45 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:29.265 10:08:45 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:29.265 10:08:45 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:29.265 10:08:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:29.265 10:08:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.OQzUFPxFME 00:35:29.265 10:08:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:29.265 10:08:45 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:29.265 10:08:45 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.OQzUFPxFME 00:35:29.265 10:08:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.OQzUFPxFME 00:35:29.265 10:08:45 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.OQzUFPxFME 00:35:29.265 10:08:45 keyring_file -- keyring/file.sh@30 -- # tgtpid=2082723 00:35:29.265 10:08:45 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:29.265 10:08:45 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2082723 00:35:29.265 10:08:45 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2082723 ']' 00:35:29.265 10:08:45 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:29.265 10:08:45 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:29.265 10:08:45 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:29.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:29.265 10:08:45 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:29.265 10:08:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:29.265 [2024-07-15 10:08:46.033127] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:35:29.265 [2024-07-15 10:08:46.033217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2082723 ] 00:35:29.525 EAL: No free 2048 kB hugepages reported on node 1 00:35:29.525 [2024-07-15 10:08:46.069700] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
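prep_key, traced above, is what puts each PSK on disk: it formats the raw hex key into an NVMeTLSkey-1 interchange string via the inline Python in nvmf/common.sh, writes it to a mktemp path, and tightens the file to mode 0600 before anything registers it. A minimal sketch of the pattern for key0 (the temp names /tmp/tmp.nNqiHXMlhz and /tmp/tmp.OQzUFPxFME were generated in this run; format_interchange_psk is the helper from nvmf/common.sh, and the redirect into the temp file is a reconstruction of what the helper's caller does):

    key0path=$(mktemp)                                       # -> /tmp/tmp.nNqiHXMlhz here
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
    chmod 0600 "$key0path"                                   # match prep_key's permissions
    rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"

The bdevperf side then attaches the controller with --psk key0, which is the step exercised over the next few commands.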
00:35:29.525 [2024-07-15 10:08:46.097244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.525 [2024-07-15 10:08:46.188223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:29.783 10:08:46 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:29.783 10:08:46 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:35:29.783 10:08:46 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:29.783 10:08:46 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.783 10:08:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:29.783 [2024-07-15 10:08:46.468514] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:29.783 null0 00:35:29.784 [2024-07-15 10:08:46.500537] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:29.784 [2024-07-15 10:08:46.501029] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:29.784 [2024-07-15 10:08:46.508556] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.784 10:08:46 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:29.784 [2024-07-15 10:08:46.520578] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:29.784 request: 00:35:29.784 { 00:35:29.784 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:29.784 "secure_channel": false, 00:35:29.784 "listen_address": { 00:35:29.784 "trtype": "tcp", 00:35:29.784 "traddr": "127.0.0.1", 00:35:29.784 "trsvcid": "4420" 00:35:29.784 }, 00:35:29.784 "method": "nvmf_subsystem_add_listener", 00:35:29.784 "req_id": 1 00:35:29.784 } 00:35:29.784 Got JSON-RPC error response 00:35:29.784 response: 00:35:29.784 { 00:35:29.784 "code": -32602, 00:35:29.784 "message": "Invalid parameters" 00:35:29.784 } 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:29.784 10:08:46 keyring_file -- keyring/file.sh@46 -- # bperfpid=2082729 00:35:29.784 10:08:46 
keyring_file -- keyring/file.sh@48 -- # waitforlisten 2082729 /var/tmp/bperf.sock 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2082729 ']' 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:29.784 10:08:46 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:29.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:29.784 10:08:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:30.043 [2024-07-15 10:08:46.571692] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:35:30.043 [2024-07-15 10:08:46.571778] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2082729 ] 00:35:30.043 EAL: No free 2048 kB hugepages reported on node 1 00:35:30.043 [2024-07-15 10:08:46.603733] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:30.043 [2024-07-15 10:08:46.635146] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.043 [2024-07-15 10:08:46.725447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:30.301 10:08:46 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:30.301 10:08:46 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:35:30.301 10:08:46 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nNqiHXMlhz 00:35:30.301 10:08:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nNqiHXMlhz 00:35:30.560 10:08:47 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.OQzUFPxFME 00:35:30.560 10:08:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.OQzUFPxFME 00:35:30.560 10:08:47 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:35:30.560 10:08:47 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:35:30.560 10:08:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:30.560 10:08:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:30.560 10:08:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:30.818 10:08:47 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.nNqiHXMlhz == \/\t\m\p\/\t\m\p\.\n\N\q\i\H\X\M\l\h\z ]] 00:35:30.818 10:08:47 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:35:30.818 10:08:47 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:30.818 10:08:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:35:30.818 10:08:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:30.818 10:08:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:31.075 10:08:47 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.OQzUFPxFME == \/\t\m\p\/\t\m\p\.\O\Q\z\U\F\P\x\F\M\E ]] 00:35:31.075 10:08:47 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:35:31.075 10:08:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:31.075 10:08:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:31.075 10:08:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:31.075 10:08:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:31.075 10:08:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:31.331 10:08:48 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:35:31.331 10:08:48 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:35:31.331 10:08:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:31.331 10:08:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:31.331 10:08:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:31.331 10:08:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:31.331 10:08:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:31.589 10:08:48 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:31.589 10:08:48 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:31.589 10:08:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:31.847 [2024-07-15 10:08:48.541323] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:31.847 nvme0n1 00:35:31.847 10:08:48 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:35:31.847 10:08:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:31.847 10:08:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:31.847 10:08:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:31.847 10:08:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:31.847 10:08:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:32.106 10:08:48 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:35:32.106 10:08:48 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:35:32.106 10:08:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:32.106 10:08:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:32.106 10:08:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:32.106 10:08:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:35:32.106 10:08:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:32.364 10:08:49 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:35:32.364 10:08:49 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:32.624 Running I/O for 1 seconds... 00:35:33.561 00:35:33.561 Latency(us) 00:35:33.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:33.561 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:33.561 nvme0n1 : 1.01 5022.84 19.62 0.00 0.00 25362.99 3616.62 32039.82 00:35:33.561 =================================================================================================================== 00:35:33.561 Total : 5022.84 19.62 0.00 0.00 25362.99 3616.62 32039.82 00:35:33.561 0 00:35:33.562 10:08:50 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:33.562 10:08:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:33.819 10:08:50 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:35:33.819 10:08:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:33.819 10:08:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:33.819 10:08:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:33.819 10:08:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:33.819 10:08:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:34.077 10:08:50 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:35:34.077 10:08:50 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:35:34.077 10:08:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:34.077 10:08:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:34.077 10:08:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:34.077 10:08:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:34.077 10:08:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:34.335 10:08:51 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:34.335 10:08:51 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:34.335 10:08:51 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:35:34.335 10:08:51 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:34.335 10:08:51 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:35:34.335 10:08:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:34.335 10:08:51 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:35:34.335 10:08:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:34.335 10:08:51 keyring_file -- 
common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:34.335 10:08:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:34.594 [2024-07-15 10:08:51.242717] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:34.594 [2024-07-15 10:08:51.242749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149f7b0 (107): Transport endpoint is not connected 00:35:34.594 [2024-07-15 10:08:51.243740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149f7b0 (9): Bad file descriptor 00:35:34.594 [2024-07-15 10:08:51.244738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:34.594 [2024-07-15 10:08:51.244757] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:34.594 [2024-07-15 10:08:51.244770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:34.594 request: 00:35:34.594 { 00:35:34.594 "name": "nvme0", 00:35:34.594 "trtype": "tcp", 00:35:34.594 "traddr": "127.0.0.1", 00:35:34.594 "adrfam": "ipv4", 00:35:34.594 "trsvcid": "4420", 00:35:34.594 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:34.594 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:34.594 "prchk_reftag": false, 00:35:34.594 "prchk_guard": false, 00:35:34.594 "hdgst": false, 00:35:34.594 "ddgst": false, 00:35:34.594 "psk": "key1", 00:35:34.594 "method": "bdev_nvme_attach_controller", 00:35:34.594 "req_id": 1 00:35:34.594 } 00:35:34.594 Got JSON-RPC error response 00:35:34.594 response: 00:35:34.594 { 00:35:34.594 "code": -5, 00:35:34.594 "message": "Input/output error" 00:35:34.594 } 00:35:34.594 10:08:51 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:35:34.594 10:08:51 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:34.594 10:08:51 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:34.594 10:08:51 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:34.594 10:08:51 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:35:34.594 10:08:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:34.594 10:08:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:34.594 10:08:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:34.594 10:08:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:34.594 10:08:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:34.853 10:08:51 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:35:34.853 10:08:51 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:35:34.853 10:08:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:34.853 10:08:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:34.853 10:08:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:34.853 10:08:51 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:34.853 10:08:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:35.111 10:08:51 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:35.111 10:08:51 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:35:35.111 10:08:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:35.369 10:08:52 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:35:35.369 10:08:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:35.627 10:08:52 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:35:35.627 10:08:52 keyring_file -- keyring/file.sh@77 -- # jq length 00:35:35.627 10:08:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:35.886 10:08:52 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:35:35.886 10:08:52 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.nNqiHXMlhz 00:35:35.886 10:08:52 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.nNqiHXMlhz 00:35:35.886 10:08:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:35:35.886 10:08:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.nNqiHXMlhz 00:35:35.886 10:08:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:35:35.886 10:08:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:35.886 10:08:52 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:35:35.886 10:08:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:35.886 10:08:52 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nNqiHXMlhz 00:35:35.886 10:08:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nNqiHXMlhz 00:35:36.144 [2024-07-15 10:08:52.733667] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.nNqiHXMlhz': 0100660 00:35:36.144 [2024-07-15 10:08:52.733712] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:36.144 request: 00:35:36.144 { 00:35:36.144 "name": "key0", 00:35:36.144 "path": "/tmp/tmp.nNqiHXMlhz", 00:35:36.144 "method": "keyring_file_add_key", 00:35:36.144 "req_id": 1 00:35:36.144 } 00:35:36.144 Got JSON-RPC error response 00:35:36.144 response: 00:35:36.144 { 00:35:36.144 "code": -1, 00:35:36.144 "message": "Operation not permitted" 00:35:36.144 } 00:35:36.144 10:08:52 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:35:36.144 10:08:52 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:36.144 10:08:52 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:36.144 10:08:52 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:36.144 10:08:52 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.nNqiHXMlhz 00:35:36.144 10:08:52 keyring_file -- 
keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nNqiHXMlhz 00:35:36.144 10:08:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nNqiHXMlhz 00:35:36.403 10:08:52 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.nNqiHXMlhz 00:35:36.403 10:08:52 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:35:36.403 10:08:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:36.403 10:08:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.403 10:08:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.403 10:08:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:36.403 10:08:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:36.660 10:08:53 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:35:36.660 10:08:53 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:36.660 10:08:53 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:35:36.660 10:08:53 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:36.660 10:08:53 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:35:36.660 10:08:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:36.660 10:08:53 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:35:36.660 10:08:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:36.660 10:08:53 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:36.660 10:08:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:36.917 [2024-07-15 10:08:53.463644] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.nNqiHXMlhz': No such file or directory 00:35:36.917 [2024-07-15 10:08:53.463683] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:36.917 [2024-07-15 10:08:53.463714] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:36.917 [2024-07-15 10:08:53.463727] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:36.917 [2024-07-15 10:08:53.463741] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:36.917 request: 00:35:36.917 { 00:35:36.917 "name": "nvme0", 00:35:36.917 "trtype": "tcp", 00:35:36.917 "traddr": "127.0.0.1", 00:35:36.917 "adrfam": "ipv4", 00:35:36.917 "trsvcid": "4420", 00:35:36.917 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:36.917 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:36.917 "prchk_reftag": false, 00:35:36.917 
"prchk_guard": false, 00:35:36.917 "hdgst": false, 00:35:36.917 "ddgst": false, 00:35:36.917 "psk": "key0", 00:35:36.917 "method": "bdev_nvme_attach_controller", 00:35:36.917 "req_id": 1 00:35:36.917 } 00:35:36.917 Got JSON-RPC error response 00:35:36.917 response: 00:35:36.917 { 00:35:36.917 "code": -19, 00:35:36.917 "message": "No such device" 00:35:36.917 } 00:35:36.917 10:08:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:35:36.917 10:08:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:36.917 10:08:53 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:36.917 10:08:53 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:36.917 10:08:53 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:35:36.917 10:08:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:37.175 10:08:53 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:37.175 10:08:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:37.175 10:08:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:37.175 10:08:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:37.175 10:08:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:37.175 10:08:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:37.175 10:08:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mCC1EpF8i4 00:35:37.175 10:08:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:37.175 10:08:53 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:37.175 10:08:53 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:37.175 10:08:53 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:37.175 10:08:53 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:37.175 10:08:53 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:37.175 10:08:53 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:37.175 10:08:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mCC1EpF8i4 00:35:37.175 10:08:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mCC1EpF8i4 00:35:37.175 10:08:53 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.mCC1EpF8i4 00:35:37.175 10:08:53 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mCC1EpF8i4 00:35:37.175 10:08:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mCC1EpF8i4 00:35:37.432 10:08:54 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:37.432 10:08:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:37.690 nvme0n1 00:35:37.690 10:08:54 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:35:37.690 10:08:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:37.690 10:08:54 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:35:37.690 10:08:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:37.690 10:08:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:37.690 10:08:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:37.947 10:08:54 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:35:37.947 10:08:54 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:35:37.947 10:08:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:38.206 10:08:54 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:35:38.206 10:08:54 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:35:38.206 10:08:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:38.206 10:08:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:38.206 10:08:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:38.463 10:08:55 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:35:38.463 10:08:55 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:35:38.463 10:08:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:38.463 10:08:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:38.463 10:08:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:38.463 10:08:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:38.463 10:08:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:38.721 10:08:55 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:35:38.721 10:08:55 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:38.721 10:08:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:38.978 10:08:55 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:35:38.978 10:08:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:38.978 10:08:55 keyring_file -- keyring/file.sh@104 -- # jq length 00:35:39.236 10:08:55 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:35:39.237 10:08:55 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mCC1EpF8i4 00:35:39.237 10:08:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mCC1EpF8i4 00:35:39.494 10:08:56 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.OQzUFPxFME 00:35:39.494 10:08:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.OQzUFPxFME 00:35:39.751 10:08:56 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key0 00:35:39.751 10:08:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.008 nvme0n1 00:35:40.008 10:08:56 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:35:40.008 10:08:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:40.267 10:08:56 keyring_file -- keyring/file.sh@112 -- # config='{ 00:35:40.267 "subsystems": [ 00:35:40.267 { 00:35:40.267 "subsystem": "keyring", 00:35:40.267 "config": [ 00:35:40.267 { 00:35:40.267 "method": "keyring_file_add_key", 00:35:40.267 "params": { 00:35:40.267 "name": "key0", 00:35:40.267 "path": "/tmp/tmp.mCC1EpF8i4" 00:35:40.267 } 00:35:40.267 }, 00:35:40.267 { 00:35:40.267 "method": "keyring_file_add_key", 00:35:40.267 "params": { 00:35:40.267 "name": "key1", 00:35:40.267 "path": "/tmp/tmp.OQzUFPxFME" 00:35:40.267 } 00:35:40.267 } 00:35:40.267 ] 00:35:40.267 }, 00:35:40.267 { 00:35:40.267 "subsystem": "iobuf", 00:35:40.267 "config": [ 00:35:40.267 { 00:35:40.267 "method": "iobuf_set_options", 00:35:40.267 "params": { 00:35:40.267 "small_pool_count": 8192, 00:35:40.267 "large_pool_count": 1024, 00:35:40.267 "small_bufsize": 8192, 00:35:40.267 "large_bufsize": 135168 00:35:40.267 } 00:35:40.267 } 00:35:40.267 ] 00:35:40.267 }, 00:35:40.267 { 00:35:40.267 "subsystem": "sock", 00:35:40.267 "config": [ 00:35:40.267 { 00:35:40.267 "method": "sock_set_default_impl", 00:35:40.267 "params": { 00:35:40.267 "impl_name": "posix" 00:35:40.267 } 00:35:40.267 }, 00:35:40.267 { 00:35:40.267 "method": "sock_impl_set_options", 00:35:40.267 "params": { 00:35:40.267 "impl_name": "ssl", 00:35:40.267 "recv_buf_size": 4096, 00:35:40.267 "send_buf_size": 4096, 00:35:40.267 "enable_recv_pipe": true, 00:35:40.267 "enable_quickack": false, 00:35:40.267 "enable_placement_id": 0, 00:35:40.267 "enable_zerocopy_send_server": true, 00:35:40.267 "enable_zerocopy_send_client": false, 00:35:40.267 "zerocopy_threshold": 0, 00:35:40.267 "tls_version": 0, 00:35:40.267 "enable_ktls": false 00:35:40.267 } 00:35:40.267 }, 00:35:40.267 { 00:35:40.267 "method": "sock_impl_set_options", 00:35:40.267 "params": { 00:35:40.267 "impl_name": "posix", 00:35:40.267 "recv_buf_size": 2097152, 00:35:40.267 "send_buf_size": 2097152, 00:35:40.267 "enable_recv_pipe": true, 00:35:40.267 "enable_quickack": false, 00:35:40.267 "enable_placement_id": 0, 00:35:40.267 "enable_zerocopy_send_server": true, 00:35:40.267 "enable_zerocopy_send_client": false, 00:35:40.267 "zerocopy_threshold": 0, 00:35:40.267 "tls_version": 0, 00:35:40.267 "enable_ktls": false 00:35:40.267 } 00:35:40.267 } 00:35:40.267 ] 00:35:40.267 }, 00:35:40.267 { 00:35:40.267 "subsystem": "vmd", 00:35:40.267 "config": [] 00:35:40.267 }, 00:35:40.267 { 00:35:40.267 "subsystem": "accel", 00:35:40.267 "config": [ 00:35:40.267 { 00:35:40.267 "method": "accel_set_options", 00:35:40.267 "params": { 00:35:40.267 "small_cache_size": 128, 00:35:40.267 "large_cache_size": 16, 00:35:40.267 "task_count": 2048, 00:35:40.267 "sequence_count": 2048, 00:35:40.267 "buf_count": 2048 00:35:40.267 } 00:35:40.267 } 00:35:40.267 ] 00:35:40.267 }, 00:35:40.267 { 00:35:40.267 "subsystem": "bdev", 00:35:40.267 "config": [ 00:35:40.267 { 00:35:40.267 "method": "bdev_set_options", 00:35:40.267 
"params": { 00:35:40.267 "bdev_io_pool_size": 65535, 00:35:40.267 "bdev_io_cache_size": 256, 00:35:40.267 "bdev_auto_examine": true, 00:35:40.267 "iobuf_small_cache_size": 128, 00:35:40.267 "iobuf_large_cache_size": 16 00:35:40.267 } 00:35:40.267 }, 00:35:40.267 { 00:35:40.267 "method": "bdev_raid_set_options", 00:35:40.267 "params": { 00:35:40.267 "process_window_size_kb": 1024 00:35:40.267 } 00:35:40.267 }, 00:35:40.267 { 00:35:40.267 "method": "bdev_iscsi_set_options", 00:35:40.267 "params": { 00:35:40.267 "timeout_sec": 30 00:35:40.267 } 00:35:40.267 }, 00:35:40.267 { 00:35:40.267 "method": "bdev_nvme_set_options", 00:35:40.267 "params": { 00:35:40.267 "action_on_timeout": "none", 00:35:40.267 "timeout_us": 0, 00:35:40.267 "timeout_admin_us": 0, 00:35:40.267 "keep_alive_timeout_ms": 10000, 00:35:40.267 "arbitration_burst": 0, 00:35:40.267 "low_priority_weight": 0, 00:35:40.267 "medium_priority_weight": 0, 00:35:40.267 "high_priority_weight": 0, 00:35:40.267 "nvme_adminq_poll_period_us": 10000, 00:35:40.267 "nvme_ioq_poll_period_us": 0, 00:35:40.267 "io_queue_requests": 512, 00:35:40.267 "delay_cmd_submit": true, 00:35:40.267 "transport_retry_count": 4, 00:35:40.267 "bdev_retry_count": 3, 00:35:40.267 "transport_ack_timeout": 0, 00:35:40.267 "ctrlr_loss_timeout_sec": 0, 00:35:40.267 "reconnect_delay_sec": 0, 00:35:40.267 "fast_io_fail_timeout_sec": 0, 00:35:40.267 "disable_auto_failback": false, 00:35:40.267 "generate_uuids": false, 00:35:40.267 "transport_tos": 0, 00:35:40.267 "nvme_error_stat": false, 00:35:40.267 "rdma_srq_size": 0, 00:35:40.267 "io_path_stat": false, 00:35:40.267 "allow_accel_sequence": false, 00:35:40.267 "rdma_max_cq_size": 0, 00:35:40.267 "rdma_cm_event_timeout_ms": 0, 00:35:40.267 "dhchap_digests": [ 00:35:40.267 "sha256", 00:35:40.267 "sha384", 00:35:40.267 "sha512" 00:35:40.267 ], 00:35:40.267 "dhchap_dhgroups": [ 00:35:40.267 "null", 00:35:40.267 "ffdhe2048", 00:35:40.267 "ffdhe3072", 00:35:40.267 "ffdhe4096", 00:35:40.267 "ffdhe6144", 00:35:40.267 "ffdhe8192" 00:35:40.267 ] 00:35:40.267 } 00:35:40.267 }, 00:35:40.267 { 00:35:40.267 "method": "bdev_nvme_attach_controller", 00:35:40.267 "params": { 00:35:40.267 "name": "nvme0", 00:35:40.267 "trtype": "TCP", 00:35:40.267 "adrfam": "IPv4", 00:35:40.267 "traddr": "127.0.0.1", 00:35:40.267 "trsvcid": "4420", 00:35:40.267 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:40.267 "prchk_reftag": false, 00:35:40.267 "prchk_guard": false, 00:35:40.267 "ctrlr_loss_timeout_sec": 0, 00:35:40.267 "reconnect_delay_sec": 0, 00:35:40.267 "fast_io_fail_timeout_sec": 0, 00:35:40.267 "psk": "key0", 00:35:40.267 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:40.267 "hdgst": false, 00:35:40.267 "ddgst": false 00:35:40.267 } 00:35:40.267 }, 00:35:40.267 { 00:35:40.267 "method": "bdev_nvme_set_hotplug", 00:35:40.268 "params": { 00:35:40.268 "period_us": 100000, 00:35:40.268 "enable": false 00:35:40.268 } 00:35:40.268 }, 00:35:40.268 { 00:35:40.268 "method": "bdev_wait_for_examine" 00:35:40.268 } 00:35:40.268 ] 00:35:40.268 }, 00:35:40.268 { 00:35:40.268 "subsystem": "nbd", 00:35:40.268 "config": [] 00:35:40.268 } 00:35:40.268 ] 00:35:40.268 }' 00:35:40.268 10:08:56 keyring_file -- keyring/file.sh@114 -- # killprocess 2082729 00:35:40.268 10:08:56 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2082729 ']' 00:35:40.268 10:08:56 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2082729 00:35:40.268 10:08:56 keyring_file -- common/autotest_common.sh@953 -- # uname 00:35:40.268 10:08:56 keyring_file -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:40.268 10:08:56 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2082729 00:35:40.268 10:08:56 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:40.268 10:08:56 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:40.268 10:08:56 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2082729' 00:35:40.268 killing process with pid 2082729 00:35:40.268 10:08:56 keyring_file -- common/autotest_common.sh@967 -- # kill 2082729 00:35:40.268 Received shutdown signal, test time was about 1.000000 seconds 00:35:40.268 00:35:40.268 Latency(us) 00:35:40.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:40.268 =================================================================================================================== 00:35:40.268 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:40.268 10:08:56 keyring_file -- common/autotest_common.sh@972 -- # wait 2082729 00:35:40.527 10:08:57 keyring_file -- keyring/file.sh@117 -- # bperfpid=2084184 00:35:40.527 10:08:57 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2084184 /var/tmp/bperf.sock 00:35:40.527 10:08:57 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2084184 ']' 00:35:40.527 10:08:57 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:40.527 10:08:57 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:40.527 10:08:57 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:40.527 10:08:57 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:35:40.527 "subsystems": [ 00:35:40.527 { 00:35:40.527 "subsystem": "keyring", 00:35:40.527 "config": [ 00:35:40.527 { 00:35:40.527 "method": "keyring_file_add_key", 00:35:40.527 "params": { 00:35:40.527 "name": "key0", 00:35:40.527 "path": "/tmp/tmp.mCC1EpF8i4" 00:35:40.527 } 00:35:40.527 }, 00:35:40.527 { 00:35:40.527 "method": "keyring_file_add_key", 00:35:40.527 "params": { 00:35:40.527 "name": "key1", 00:35:40.527 "path": "/tmp/tmp.OQzUFPxFME" 00:35:40.527 } 00:35:40.527 } 00:35:40.527 ] 00:35:40.527 }, 00:35:40.527 { 00:35:40.527 "subsystem": "iobuf", 00:35:40.527 "config": [ 00:35:40.527 { 00:35:40.527 "method": "iobuf_set_options", 00:35:40.527 "params": { 00:35:40.527 "small_pool_count": 8192, 00:35:40.527 "large_pool_count": 1024, 00:35:40.527 "small_bufsize": 8192, 00:35:40.527 "large_bufsize": 135168 00:35:40.528 } 00:35:40.528 } 00:35:40.528 ] 00:35:40.528 }, 00:35:40.528 { 00:35:40.528 "subsystem": "sock", 00:35:40.528 "config": [ 00:35:40.528 { 00:35:40.528 "method": "sock_set_default_impl", 00:35:40.528 "params": { 00:35:40.528 "impl_name": "posix" 00:35:40.528 } 00:35:40.528 }, 00:35:40.528 { 00:35:40.528 "method": "sock_impl_set_options", 00:35:40.528 "params": { 00:35:40.528 "impl_name": "ssl", 00:35:40.528 "recv_buf_size": 4096, 00:35:40.528 "send_buf_size": 4096, 00:35:40.528 "enable_recv_pipe": true, 00:35:40.528 "enable_quickack": false, 00:35:40.528 "enable_placement_id": 0, 00:35:40.528 "enable_zerocopy_send_server": true, 00:35:40.528 "enable_zerocopy_send_client": false, 00:35:40.528 "zerocopy_threshold": 0, 00:35:40.528 "tls_version": 0, 00:35:40.528 "enable_ktls": false 00:35:40.528 } 00:35:40.528 }, 00:35:40.528 { 00:35:40.528 "method": 
"sock_impl_set_options", 00:35:40.528 "params": { 00:35:40.528 "impl_name": "posix", 00:35:40.528 "recv_buf_size": 2097152, 00:35:40.528 "send_buf_size": 2097152, 00:35:40.528 "enable_recv_pipe": true, 00:35:40.528 "enable_quickack": false, 00:35:40.528 "enable_placement_id": 0, 00:35:40.528 "enable_zerocopy_send_server": true, 00:35:40.528 "enable_zerocopy_send_client": false, 00:35:40.528 "zerocopy_threshold": 0, 00:35:40.528 "tls_version": 0, 00:35:40.528 "enable_ktls": false 00:35:40.528 } 00:35:40.528 } 00:35:40.528 ] 00:35:40.528 }, 00:35:40.528 { 00:35:40.528 "subsystem": "vmd", 00:35:40.528 "config": [] 00:35:40.528 }, 00:35:40.528 { 00:35:40.528 "subsystem": "accel", 00:35:40.528 "config": [ 00:35:40.528 { 00:35:40.528 "method": "accel_set_options", 00:35:40.528 "params": { 00:35:40.528 "small_cache_size": 128, 00:35:40.528 "large_cache_size": 16, 00:35:40.528 "task_count": 2048, 00:35:40.528 "sequence_count": 2048, 00:35:40.528 "buf_count": 2048 00:35:40.528 } 00:35:40.528 } 00:35:40.528 ] 00:35:40.528 }, 00:35:40.528 { 00:35:40.528 "subsystem": "bdev", 00:35:40.528 "config": [ 00:35:40.528 { 00:35:40.528 "method": "bdev_set_options", 00:35:40.528 "params": { 00:35:40.528 "bdev_io_pool_size": 65535, 00:35:40.528 "bdev_io_cache_size": 256, 00:35:40.528 "bdev_auto_examine": true, 00:35:40.528 "iobuf_small_cache_size": 128, 00:35:40.528 "iobuf_large_cache_size": 16 00:35:40.528 } 00:35:40.528 }, 00:35:40.528 { 00:35:40.528 "method": "bdev_raid_set_options", 00:35:40.528 "params": { 00:35:40.528 "process_window_size_kb": 1024 00:35:40.528 } 00:35:40.528 }, 00:35:40.528 { 00:35:40.528 "method": "bdev_iscsi_set_options", 00:35:40.528 "params": { 00:35:40.528 "timeout_sec": 30 00:35:40.528 } 00:35:40.528 }, 00:35:40.528 { 00:35:40.528 "method": "bdev_nvme_set_options", 00:35:40.528 "params": { 00:35:40.528 "action_on_timeout": "none", 00:35:40.528 "timeout_us": 0, 00:35:40.528 "timeout_admin_us": 0, 00:35:40.528 "keep_alive_timeout_ms": 10000, 00:35:40.528 "arbitration_burst": 0, 00:35:40.528 "low_priority_weight": 0, 00:35:40.528 "medium_priority_weight": 0, 00:35:40.528 "high_priority_weight": 0, 00:35:40.528 "nvme_adminq_poll_period_us": 10000, 00:35:40.528 "nvme_ioq_poll_period_us": 0, 00:35:40.528 "io_queue_requests": 512, 00:35:40.528 "delay_cmd_submit": true, 00:35:40.528 "transport_retry_count": 4, 00:35:40.528 "bdev_retry_count": 3, 00:35:40.528 "transport_ack_timeout": 0, 00:35:40.528 "ctrlr_loss_timeout_sec": 0, 00:35:40.528 "reconnect_delay_sec": 0, 00:35:40.528 "fast_io_fail_timeout_sec": 0, 00:35:40.528 "disable_auto_failback": false, 00:35:40.528 "generate_uuids": false, 00:35:40.528 "transport_tos": 0, 00:35:40.528 "nvme_error_stat": false, 00:35:40.528 "rdma_srq_size": 0, 00:35:40.528 "io_path_stat": false, 00:35:40.528 "allow_accel_sequence": false, 00:35:40.528 "rdma_max_cq_size": 0, 00:35:40.528 "rdma_cm_event_timeout_ms": 0, 00:35:40.528 "dhchap_digests": [ 00:35:40.528 "sha256", 00:35:40.528 "sha384", 00:35:40.528 "sha512" 00:35:40.528 ], 00:35:40.528 "dhchap_dhgroups": [ 00:35:40.528 "null", 00:35:40.528 "ffdhe2048", 00:35:40.528 "ffdhe3072", 00:35:40.528 "ffdhe4096", 00:35:40.528 "ffdhe6144", 00:35:40.528 "ffdhe8192" 00:35:40.528 ] 00:35:40.528 } 00:35:40.528 }, 00:35:40.528 { 00:35:40.528 "method": "bdev_nvme_attach_controller", 00:35:40.528 "params": { 00:35:40.528 "name": "nvme0", 00:35:40.528 "trtype": "TCP", 00:35:40.528 "adrfam": "IPv4", 00:35:40.528 "traddr": "127.0.0.1", 00:35:40.528 "trsvcid": "4420", 00:35:40.528 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:35:40.528 "prchk_reftag": false, 00:35:40.528 "prchk_guard": false, 00:35:40.528 "ctrlr_loss_timeout_sec": 0, 00:35:40.528 "reconnect_delay_sec": 0, 00:35:40.528 "fast_io_fail_timeout_sec": 0, 00:35:40.528 "psk": "key0", 00:35:40.528 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:40.528 "hdgst": false, 00:35:40.528 "ddgst": false 00:35:40.528 } 00:35:40.528 }, 00:35:40.528 { 00:35:40.528 "method": "bdev_nvme_set_hotplug", 00:35:40.528 "params": { 00:35:40.528 "period_us": 100000, 00:35:40.528 "enable": false 00:35:40.528 } 00:35:40.528 }, 00:35:40.528 { 00:35:40.528 "method": "bdev_wait_for_examine" 00:35:40.528 } 00:35:40.528 ] 00:35:40.528 }, 00:35:40.528 { 00:35:40.528 "subsystem": "nbd", 00:35:40.528 "config": [] 00:35:40.528 } 00:35:40.528 ] 00:35:40.528 }' 00:35:40.528 10:08:57 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:40.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:40.528 10:08:57 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:40.528 10:08:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:40.528 [2024-07-15 10:08:57.238499] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:35:40.528 [2024-07-15 10:08:57.238594] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084184 ] 00:35:40.528 EAL: No free 2048 kB hugepages reported on node 1 00:35:40.529 [2024-07-15 10:08:57.270459] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:35:40.529 [2024-07-15 10:08:57.298153] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.788 [2024-07-15 10:08:57.384302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.788 [2024-07-15 10:08:57.569784] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:41.741 10:08:58 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:41.741 10:08:58 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:35:41.741 10:08:58 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:35:41.741 10:08:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:41.741 10:08:58 keyring_file -- keyring/file.sh@120 -- # jq length 00:35:41.741 10:08:58 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:35:41.741 10:08:58 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:35:41.741 10:08:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:41.741 10:08:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:41.741 10:08:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:41.741 10:08:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:41.741 10:08:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:41.999 10:08:58 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:41.999 10:08:58 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:35:41.999 10:08:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:41.999 10:08:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:41.999 10:08:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:41.999 10:08:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:41.999 10:08:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:42.257 10:08:58 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:35:42.257 10:08:58 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:35:42.257 10:08:58 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:35:42.257 10:08:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:42.516 10:08:59 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:35:42.516 10:08:59 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:42.516 10:08:59 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.mCC1EpF8i4 /tmp/tmp.OQzUFPxFME 00:35:42.516 10:08:59 keyring_file -- keyring/file.sh@20 -- # killprocess 2084184 00:35:42.516 10:08:59 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2084184 ']' 00:35:42.516 10:08:59 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2084184 00:35:42.516 10:08:59 keyring_file -- common/autotest_common.sh@953 -- # uname 00:35:42.516 10:08:59 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:42.516 10:08:59 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2084184 00:35:42.516 10:08:59 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:42.516 10:08:59 
keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:42.516 10:08:59 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2084184' 00:35:42.516 killing process with pid 2084184 00:35:42.516 10:08:59 keyring_file -- common/autotest_common.sh@967 -- # kill 2084184 00:35:42.516 Received shutdown signal, test time was about 1.000000 seconds 00:35:42.516 00:35:42.516 Latency(us) 00:35:42.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:42.516 =================================================================================================================== 00:35:42.516 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:42.516 10:08:59 keyring_file -- common/autotest_common.sh@972 -- # wait 2084184 00:35:42.774 10:08:59 keyring_file -- keyring/file.sh@21 -- # killprocess 2082723 00:35:42.774 10:08:59 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2082723 ']' 00:35:42.774 10:08:59 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2082723 00:35:42.774 10:08:59 keyring_file -- common/autotest_common.sh@953 -- # uname 00:35:42.774 10:08:59 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:42.774 10:08:59 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2082723 00:35:42.774 10:08:59 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:42.774 10:08:59 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:42.774 10:08:59 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2082723' 00:35:42.774 killing process with pid 2082723 00:35:42.774 10:08:59 keyring_file -- common/autotest_common.sh@967 -- # kill 2082723 00:35:42.774 [2024-07-15 10:08:59.455555] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:42.774 10:08:59 keyring_file -- common/autotest_common.sh@972 -- # wait 2082723 00:35:43.340 00:35:43.340 real 0m14.015s 00:35:43.340 user 0m34.779s 00:35:43.340 sys 0m3.203s 00:35:43.340 10:08:59 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:43.340 10:08:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:43.340 ************************************ 00:35:43.340 END TEST keyring_file 00:35:43.340 ************************************ 00:35:43.340 10:08:59 -- common/autotest_common.sh@1142 -- # return 0 00:35:43.340 10:08:59 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:35:43.340 10:08:59 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:43.340 10:08:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:43.340 10:08:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:43.340 10:08:59 -- common/autotest_common.sh@10 -- # set +x 00:35:43.340 ************************************ 00:35:43.340 START TEST keyring_linux 00:35:43.340 ************************************ 00:35:43.340 10:08:59 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:43.340 * Looking for test storage... 
00:35:43.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:43.341 10:08:59 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:43.341 10:08:59 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:43.341 10:08:59 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:43.341 10:08:59 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:43.341 10:08:59 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:43.341 10:08:59 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.341 10:08:59 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.341 10:08:59 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.341 10:08:59 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:43.341 10:08:59 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:43.341 10:08:59 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:43.341 10:08:59 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:43.341 10:08:59 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:43.341 10:08:59 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:43.341 10:08:59 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:43.341 10:08:59 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:43.341 10:08:59 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:43.341 10:08:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:43.341 10:08:59 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:43.341 10:08:59 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:43.341 10:08:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:43.341 10:08:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:43.341 10:08:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@705 -- # python - 00:35:43.341 10:08:59 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:43.341 10:08:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:43.341 /tmp/:spdk-test:key0 00:35:43.341 10:08:59 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:43.341 10:08:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:43.341 10:08:59 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:43.341 10:08:59 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:43.341 10:08:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:43.341 10:08:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:43.341 10:08:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:35:43.341 10:08:59 keyring_linux -- nvmf/common.sh@705 -- # python - 00:35:43.341 10:09:00 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:43.341 10:09:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:43.341 /tmp/:spdk-test:key1 00:35:43.341 10:09:00 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2084551 00:35:43.341 10:09:00 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:43.341 10:09:00 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2084551 00:35:43.341 10:09:00 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2084551 ']' 00:35:43.341 10:09:00 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:43.341 10:09:00 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:43.341 10:09:00 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:43.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:43.341 10:09:00 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:43.341 10:09:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:43.341 [2024-07-15 10:09:00.077035] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:35:43.341 [2024-07-15 10:09:00.077124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084551 ] 00:35:43.341 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.341 [2024-07-15 10:09:00.111335] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:35:43.599 [2024-07-15 10:09:00.140034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:43.599 [2024-07-15 10:09:00.225664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:43.857 10:09:00 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:43.857 10:09:00 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:35:43.857 10:09:00 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:43.857 10:09:00 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.857 10:09:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:43.857 [2024-07-15 10:09:00.469778] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:43.857 null0 00:35:43.857 [2024-07-15 10:09:00.501821] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:43.857 [2024-07-15 10:09:00.502341] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:43.857 10:09:00 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.857 10:09:00 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:43.857 746146570 00:35:43.857 10:09:00 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:43.857 443335049 00:35:43.857 10:09:00 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2084647 00:35:43.857 10:09:00 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:43.857 10:09:00 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2084647 /var/tmp/bperf.sock 00:35:43.857 10:09:00 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2084647 ']' 00:35:43.857 10:09:00 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:43.857 10:09:00 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:43.857 10:09:00 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:43.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:43.857 10:09:00 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:43.857 10:09:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:43.857 [2024-07-15 10:09:00.567612] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:35:43.857 [2024-07-15 10:09:00.567686] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084647 ] 00:35:43.857 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.857 [2024-07-15 10:09:00.601797] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
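
For the keyring_linux tests the PSKs live in the kernel session keyring rather than in temp files: keyctl add prints the serial of the new key (746146570 and 443335049 above), and the test later cross-checks those serials against the .sn field from keyring_get_keys and against keyctl search/print. The payload is the NVMe TLS PSK interchange string produced by the inline python step inside prep_key. A hedged reconstruction of that formatting, assuming the usual interchange convention of a little-endian CRC-32 appended to the key bytes before base64 (hash indicator 00 meaning the configured PSK is used as-is):

# Sketch of format_interchange_psk: NVMeTLSkey-1:<hash>:<base64(key+CRC-32)>:
# Under the CRC assumption above this reproduces the NVMeTLSkey-1:00:...JEiQ:
# value added to the keyring in the trace.
python3 - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # CRC-32 over the key bytes
print("NVMeTLSkey-1:00:{}:".format(base64.b64encode(key + crc).decode()))
EOF

The kernel-keyring round trip exercised above then reduces to:

# keyctl add returns the new key's serial; search and print resolve it back.
sn=$(keyctl add user :spdk-test:key0 \
    "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
keyctl search @s user :spdk-test:key0   # prints the same serial as $sn
keyctl print "$sn"                      # dumps the payload for comparison
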
00:35:43.857 [2024-07-15 10:09:00.631276] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:44.114 [2024-07-15 10:09:00.727643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.114 10:09:00 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:44.114 10:09:00 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:35:44.114 10:09:00 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:44.114 10:09:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:44.370 10:09:01 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:44.370 10:09:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:44.627 10:09:01 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:44.627 10:09:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:44.884 [2024-07-15 10:09:01.621151] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:45.141 nvme0n1 00:35:45.141 10:09:01 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:35:45.141 10:09:01 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:45.141 10:09:01 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:45.141 10:09:01 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:45.141 10:09:01 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:45.141 10:09:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.398 10:09:01 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:45.398 10:09:01 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:45.398 10:09:01 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:45.398 10:09:01 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:45.398 10:09:01 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:45.398 10:09:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.398 10:09:01 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:45.657 10:09:02 keyring_linux -- keyring/linux.sh@25 -- # sn=746146570 00:35:45.657 10:09:02 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:45.657 10:09:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:45.657 10:09:02 keyring_linux -- keyring/linux.sh@26 -- # [[ 746146570 == \7\4\6\1\4\6\5\7\0 ]] 00:35:45.657 10:09:02 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 746146570 00:35:45.657 10:09:02 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:45.657 10:09:02 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:45.657 Running I/O for 1 seconds... 00:35:46.594 00:35:46.594 Latency(us) 00:35:46.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:46.594 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:46.594 nvme0n1 : 1.02 5284.92 20.64 0.00 0.00 24008.53 10922.67 33593.27 00:35:46.594 =================================================================================================================== 00:35:46.594 Total : 5284.92 20.64 0.00 0.00 24008.53 10922.67 33593.27 00:35:46.594 0 00:35:46.594 10:09:03 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:46.594 10:09:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:46.852 10:09:03 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:46.852 10:09:03 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:46.852 10:09:03 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:46.852 10:09:03 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:46.852 10:09:03 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:46.852 10:09:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.420 10:09:03 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:47.420 10:09:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:47.420 10:09:03 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:47.420 10:09:03 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:47.420 10:09:03 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:35:47.420 10:09:03 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:47.420 10:09:03 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:35:47.420 10:09:03 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:47.420 10:09:03 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:35:47.420 10:09:03 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:47.420 10:09:03 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:47.420 10:09:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:47.420 [2024-07-15 10:09:04.167398] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:47.420 [2024-07-15 10:09:04.167786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a7690 (107): Transport endpoint is not connected 00:35:47.420 [2024-07-15 10:09:04.168765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a7690 (9): Bad file descriptor 00:35:47.420 [2024-07-15 10:09:04.169763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:47.420 [2024-07-15 10:09:04.169795] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:47.420 [2024-07-15 10:09:04.169820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:47.420 request: 00:35:47.420 { 00:35:47.420 "name": "nvme0", 00:35:47.420 "trtype": "tcp", 00:35:47.420 "traddr": "127.0.0.1", 00:35:47.420 "adrfam": "ipv4", 00:35:47.420 "trsvcid": "4420", 00:35:47.420 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:47.420 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:47.420 "prchk_reftag": false, 00:35:47.420 "prchk_guard": false, 00:35:47.420 "hdgst": false, 00:35:47.420 "ddgst": false, 00:35:47.420 "psk": ":spdk-test:key1", 00:35:47.420 "method": "bdev_nvme_attach_controller", 00:35:47.420 "req_id": 1 00:35:47.420 } 00:35:47.420 Got JSON-RPC error response 00:35:47.420 response: 00:35:47.420 { 00:35:47.420 "code": -5, 00:35:47.420 "message": "Input/output error" 00:35:47.421 } 00:35:47.421 10:09:04 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:35:47.421 10:09:04 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:47.421 10:09:04 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:47.421 10:09:04 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:47.421 10:09:04 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:47.421 10:09:04 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:47.421 10:09:04 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:47.421 10:09:04 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:47.421 10:09:04 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:47.421 10:09:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:47.421 10:09:04 keyring_linux -- keyring/linux.sh@33 -- # sn=746146570 00:35:47.421 10:09:04 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 746146570 00:35:47.421 1 links removed 00:35:47.421 10:09:04 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:47.421 10:09:04 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:47.421 10:09:04 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:47.421 10:09:04 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:47.421 10:09:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:47.421 10:09:04 keyring_linux -- keyring/linux.sh@33 -- # sn=443335049 00:35:47.421 10:09:04 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 443335049 00:35:47.421 1 links removed 00:35:47.421 10:09:04 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2084647 00:35:47.421 10:09:04 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2084647 ']' 00:35:47.421 10:09:04 
keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2084647 00:35:47.421 10:09:04 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:35:47.421 10:09:04 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:47.421 10:09:04 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2084647 00:35:47.679 10:09:04 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:47.679 10:09:04 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:47.679 10:09:04 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2084647' 00:35:47.679 killing process with pid 2084647 00:35:47.679 10:09:04 keyring_linux -- common/autotest_common.sh@967 -- # kill 2084647 00:35:47.679 Received shutdown signal, test time was about 1.000000 seconds 00:35:47.679 00:35:47.679 Latency(us) 00:35:47.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:47.679 =================================================================================================================== 00:35:47.679 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:47.679 10:09:04 keyring_linux -- common/autotest_common.sh@972 -- # wait 2084647 00:35:47.679 10:09:04 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2084551 00:35:47.679 10:09:04 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2084551 ']' 00:35:47.679 10:09:04 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2084551 00:35:47.679 10:09:04 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:35:47.679 10:09:04 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:47.679 10:09:04 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2084551 00:35:47.679 10:09:04 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:47.679 10:09:04 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:47.679 10:09:04 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2084551' 00:35:47.679 killing process with pid 2084551 00:35:47.679 10:09:04 keyring_linux -- common/autotest_common.sh@967 -- # kill 2084551 00:35:47.679 10:09:04 keyring_linux -- common/autotest_common.sh@972 -- # wait 2084551 00:35:48.245 00:35:48.245 real 0m4.927s 00:35:48.245 user 0m9.392s 00:35:48.245 sys 0m1.563s 00:35:48.245 10:09:04 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:48.245 10:09:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:48.245 ************************************ 00:35:48.245 END TEST keyring_linux 00:35:48.245 ************************************ 00:35:48.245 10:09:04 -- common/autotest_common.sh@1142 -- # return 0 00:35:48.245 10:09:04 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:35:48.245 10:09:04 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:35:48.245 10:09:04 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:35:48.245 10:09:04 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:35:48.245 10:09:04 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:35:48.245 10:09:04 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:35:48.245 10:09:04 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:35:48.245 10:09:04 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:35:48.245 10:09:04 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:35:48.245 10:09:04 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:35:48.245 10:09:04 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 
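The check_keys 1 :spdk-test:key0 and check_keys 0 calls traced above cross-check SPDK's keyring against the kernel's: keyring_get_keys must report the expected number of keys, and the serial SPDK stores for a key must equal what keyctl search finds in @s. A condensed, illustrative reconstruction (rpc.py stands in for the full scripts/rpc.py path used in the trace):

    check_keys_sketch() {
        local count=$1 name=$2 keys sn
        keys=$(rpc.py -s /var/tmp/bperf.sock keyring_get_keys)     # JSON array of registered keys
        (( $(jq length <<< "$keys") == count )) || return 1        # expected number of keys
        (( count == 0 )) && return 0                               # nothing left to compare
        sn=$(jq -r ".[] | select(.name == \"$name\") | .sn" <<< "$keys")
        [[ $sn == "$(keyctl search @s user "$name")" ]]            # SPDK's serial matches the kernel's
    }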
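The second attach attempt deliberately presents :spdk-test:key1 to a listener configured for key0, so the NOT wrapper treats the JSON-RPC Input/output error (code -5) as the passing outcome. Simplified, the wrapper inverts the wrapped command's exit status while still treating death-by-signal (es > 128) as a real failure; this is a sketch of the logic visible in the xtrace, not the helper's full body:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return $es    # killed by a signal: genuine failure
        (( es == 0 )) && return 1       # command succeeded where failure was expected
        return 0
    }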
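Teardown then kills bdevperf (2084647) and spdk_tgt (2084551) through the killprocess helper, whose traced steps are: confirm the OS is Linux, resolve the process name (reactor_1 / reactor_0 here), choose plain or sudo kill accordingly, and wait for the pid to be reaped. Roughly, as a condensed illustration of the trace:

    killprocess_sketch() {
        local pid=$1 pname
        pname=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for an SPDK app
        echo "killing process with pid $pid"
        if [[ $pname == sudo ]]; then
            sudo kill "$pid"                       # the target is really sudo's child
        else
            kill "$pid"
        fi
        wait "$pid" || true                        # reap; tolerate nonzero exit on shutdown
    }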
00:35:48.245 10:09:04 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:35:48.245 10:09:04 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:35:48.245 10:09:04 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:35:48.245 10:09:04 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:35:48.245 10:09:04 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:35:48.245 10:09:04 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:35:48.245 10:09:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:48.245 10:09:04 -- common/autotest_common.sh@10 -- # set +x 00:35:48.245 10:09:04 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:35:48.245 10:09:04 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:35:48.245 10:09:04 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:35:48.245 10:09:04 -- common/autotest_common.sh@10 -- # set +x 00:35:50.153 INFO: APP EXITING 00:35:50.153 INFO: killing all VMs 00:35:50.153 INFO: killing vhost app 00:35:50.153 INFO: EXIT DONE 00:35:51.089 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:35:51.089 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:35:51.089 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:35:51.089 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:35:51.089 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:35:51.089 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:35:51.089 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:35:51.089 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:35:51.089 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:35:51.089 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:35:51.089 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:35:51.089 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:35:51.089 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:35:51.348 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:35:51.348 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:35:51.348 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:35:51.348 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:35:52.284 Cleaning 00:35:52.284 Removing: /var/run/dpdk/spdk0/config 00:35:52.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:52.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:52.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:52.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:52.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:52.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:52.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:52.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:52.284 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:52.543 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:52.544 Removing: /var/run/dpdk/spdk1/config 00:35:52.544 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:52.544 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:52.544 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:52.544 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:52.544 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:52.544 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:52.544 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:52.544 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:52.544 Removing: 
/var/run/dpdk/spdk1/fbarray_memzone 00:35:52.544 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:52.544 Removing: /var/run/dpdk/spdk1/mp_socket 00:35:52.544 Removing: /var/run/dpdk/spdk2/config 00:35:52.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:52.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:52.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:52.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:52.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:52.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:52.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:52.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:52.544 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:52.544 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:52.544 Removing: /var/run/dpdk/spdk3/config 00:35:52.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:52.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:52.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:52.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:52.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:52.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:52.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:52.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:52.544 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:52.544 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:52.544 Removing: /var/run/dpdk/spdk4/config 00:35:52.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:52.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:52.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:52.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:52.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:52.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:52.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:52.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:52.544 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:52.544 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:52.544 Removing: /dev/shm/bdev_svc_trace.1 00:35:52.544 Removing: /dev/shm/nvmf_trace.0 00:35:52.544 Removing: /dev/shm/spdk_tgt_trace.pid1765360 00:35:52.544 Removing: /var/run/dpdk/spdk0 00:35:52.544 Removing: /var/run/dpdk/spdk1 00:35:52.544 Removing: /var/run/dpdk/spdk2 00:35:52.544 Removing: /var/run/dpdk/spdk3 00:35:52.544 Removing: /var/run/dpdk/spdk4 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1763816 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1764546 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1765360 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1765797 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1766490 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1766626 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1767339 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1767356 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1767600 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1768788 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1769825 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1770023 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1770225 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1770527 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1770715 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1770873 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1771030 
00:35:52.544 Removing: /var/run/dpdk/spdk_pid1771209 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1771525 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1773874 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1774040 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1774203 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1774216 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1774589 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1774646 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1774951 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1775079 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1775244 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1775264 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1775516 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1775547 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1775917 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1776078 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1776313 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1776443 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1776583 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1776656 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1776922 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1777077 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1777241 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1777395 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1777667 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1777824 00:35:52.544 Removing: /var/run/dpdk/spdk_pid1778036 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1778347 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1778521 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1778696 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1778854 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1779375 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1779781 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1779945 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1780195 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1780374 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1780536 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1780694 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1780973 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1781128 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1781317 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1781521 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1783446 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1836524 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1839127 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1846589 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1849759 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1852101 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1852627 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1856462 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1860181 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1860215 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1860831 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1861494 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1862084 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1862446 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1862551 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1862693 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1862829 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1862832 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1863484 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1864066 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1864685 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1865081 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1865083 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1865345 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1866225 
00:35:52.803 Removing: /var/run/dpdk/spdk_pid1866938 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1872913 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1873185 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1875702 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1879400 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1881463 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1887768 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1892960 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1894208 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1894878 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1905043 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1907644 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1932913 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1935777 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1936955 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1938154 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1938288 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1938426 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1938495 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1938876 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1940183 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1940787 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1941215 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1942823 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1943122 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1943683 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1946063 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1949420 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1952858 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1976330 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1978986 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1982804 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1983700 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1984785 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1987333 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1989665 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1994365 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1994369 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1997130 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1997266 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1997402 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1997781 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1997793 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1998742 00:35:52.803 Removing: /var/run/dpdk/spdk_pid1999996 00:35:52.803 Removing: /var/run/dpdk/spdk_pid2001213 00:35:52.803 Removing: /var/run/dpdk/spdk_pid2002388 00:35:52.803 Removing: /var/run/dpdk/spdk_pid2003574 00:35:52.803 Removing: /var/run/dpdk/spdk_pid2004754 00:35:52.803 Removing: /var/run/dpdk/spdk_pid2008554 00:35:52.803 Removing: /var/run/dpdk/spdk_pid2008889 00:35:52.803 Removing: /var/run/dpdk/spdk_pid2010166 00:35:52.803 Removing: /var/run/dpdk/spdk_pid2010898 00:35:52.803 Removing: /var/run/dpdk/spdk_pid2014608 00:35:52.803 Removing: /var/run/dpdk/spdk_pid2016476 00:35:52.803 Removing: /var/run/dpdk/spdk_pid2019985 00:35:52.803 Removing: /var/run/dpdk/spdk_pid2023798 00:35:52.803 Removing: /var/run/dpdk/spdk_pid2030011 00:35:52.803 Removing: /var/run/dpdk/spdk_pid2034356 00:35:52.803 Removing: /var/run/dpdk/spdk_pid2034358 00:35:52.803 Removing: /var/run/dpdk/spdk_pid2046562 00:35:52.803 Removing: /var/run/dpdk/spdk_pid2046968 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2047377 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2047822 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2048356 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2048765 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2049292 
00:35:53.062 Removing: /var/run/dpdk/spdk_pid2049693 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2052068 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2052327 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2056731 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2056787 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2058389 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2063307 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2063412 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2066188 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2067590 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2068985 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2069841 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2071131 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2072006 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2077305 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2077660 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2078050 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2079601 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2079965 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2080280 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2082723 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2082729 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2084184 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2084551 00:35:53.062 Removing: /var/run/dpdk/spdk_pid2084647 00:35:53.062 Clean 00:35:53.062 10:09:09 -- common/autotest_common.sh@1451 -- # return 0 00:35:53.062 10:09:09 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:35:53.062 10:09:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:53.062 10:09:09 -- common/autotest_common.sh@10 -- # set +x 00:35:53.062 10:09:09 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:35:53.062 10:09:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:53.062 10:09:09 -- common/autotest_common.sh@10 -- # set +x 00:35:53.062 10:09:09 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:53.062 10:09:09 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:35:53.062 10:09:09 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:35:53.062 10:09:09 -- spdk/autotest.sh@391 -- # hash lcov 00:35:53.062 10:09:09 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:35:53.062 10:09:09 -- spdk/autotest.sh@393 -- # hostname 00:35:53.062 10:09:09 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:35:53.321 geninfo: WARNING: invalid characters removed from testname! 
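The coverage post-processing that follows captures the after-test counters, folds them into the pre-test baseline, and strips code that is not SPDK's own. Condensed for readability, with a subset of the rc flags factored into a variable, the per-pattern removals of the trace combined into single calls, and paths shortened:

    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
    $LCOV -c -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info          # capture post-test counters
    $LCOV -a cov_base.info -a cov_test.info -o cov_total.info          # merge with the baseline
    $LCOV -r cov_total.info '*/dpdk/*' '/usr/*' -o cov_total.info      # drop out-of-tree sources
    $LCOV -r cov_total.info '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*' -o cov_total.info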
00:36:25.412 10:09:37 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:25.412 10:09:41 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:27.949 10:09:44 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:31.243 10:09:47 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:34.534 10:09:50 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:37.069 10:09:53 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:40.385 10:09:56 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:40.385 10:09:56 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:40.385 10:09:56 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:40.385 10:09:56 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:40.385 10:09:56 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:40.385 10:09:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.385 10:09:56 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.385 10:09:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.385 10:09:56 -- paths/export.sh@5 -- $ export PATH 00:36:40.385 10:09:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.385 10:09:56 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:36:40.385 10:09:56 -- common/autobuild_common.sh@444 -- $ date +%s 00:36:40.385 10:09:56 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721030996.XXXXXX 00:36:40.385 10:09:56 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721030996.2vsjwX 00:36:40.385 10:09:56 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:36:40.385 10:09:56 -- common/autobuild_common.sh@450 -- $ '[' -n main ']' 00:36:40.385 10:09:56 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:36:40.385 10:09:56 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:36:40.385 10:09:56 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:36:40.385 10:09:56 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:36:40.385 10:09:56 -- common/autobuild_common.sh@460 -- $ get_config_params 00:36:40.385 10:09:56 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:36:40.385 10:09:56 -- common/autotest_common.sh@10 -- $ set +x 00:36:40.385 10:09:56 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:36:40.385 10:09:56 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:36:40.385 10:09:56 -- pm/common@17 -- $ local monitor 00:36:40.385 10:09:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:40.385 10:09:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:40.385 10:09:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:40.385 
10:09:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:40.385 10:09:56 -- pm/common@21 -- $ date +%s 00:36:40.385 10:09:56 -- pm/common@21 -- $ date +%s 00:36:40.385 10:09:56 -- pm/common@25 -- $ sleep 1 00:36:40.385 10:09:56 -- pm/common@21 -- $ date +%s 00:36:40.385 10:09:56 -- pm/common@21 -- $ date +%s 00:36:40.385 10:09:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721030996 00:36:40.385 10:09:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721030996 00:36:40.385 10:09:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721030996 00:36:40.385 10:09:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721030996 00:36:40.385 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721030996_collect-vmstat.pm.log 00:36:40.385 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721030996_collect-cpu-load.pm.log 00:36:40.385 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721030996_collect-cpu-temp.pm.log 00:36:40.385 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721030996_collect-bmc-pm.bmc.pm.log 00:36:40.952 10:09:57 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:36:40.952 10:09:57 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:36:40.952 10:09:57 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:40.952 10:09:57 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:36:40.952 10:09:57 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:36:40.952 10:09:57 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:36:40.952 10:09:57 -- spdk/autopackage.sh@19 -- $ timing_finish 00:36:40.952 10:09:57 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:40.952 10:09:57 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:36:40.952 10:09:57 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:40.952 10:09:57 -- spdk/autopackage.sh@20 -- $ exit 0 00:36:40.952 10:09:57 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:36:40.952 10:09:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:36:40.952 10:09:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:36:40.952 10:09:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:40.952 10:09:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:36:40.952 10:09:57 -- pm/common@44 -- $ pid=2096542 00:36:40.952 10:09:57 -- pm/common@50 -- $ kill -TERM 2096542 00:36:40.952 10:09:57 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:36:40.952 10:09:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:36:40.952 10:09:57 -- pm/common@44 -- $ pid=2096544 00:36:40.952 10:09:57 -- pm/common@50 -- $ kill -TERM 2096544 00:36:40.952 10:09:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:40.952 10:09:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:36:40.952 10:09:57 -- pm/common@44 -- $ pid=2096546 00:36:40.952 10:09:57 -- pm/common@50 -- $ kill -TERM 2096546 00:36:40.952 10:09:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:40.952 10:09:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:36:40.952 10:09:57 -- pm/common@44 -- $ pid=2096576 00:36:40.952 10:09:57 -- pm/common@50 -- $ sudo -E kill -TERM 2096576 00:36:40.952 + [[ -n 1664059 ]] 00:36:40.952 + sudo kill 1664059 00:36:40.961 [Pipeline] } 00:36:40.974 [Pipeline] // stage 00:36:40.979 [Pipeline] } 00:36:40.994 [Pipeline] // timeout 00:36:40.998 [Pipeline] } 00:36:41.012 [Pipeline] // catchError 00:36:41.016 [Pipeline] } 00:36:41.030 [Pipeline] // wrap 00:36:41.035 [Pipeline] } 00:36:41.047 [Pipeline] // catchError 00:36:41.073 [Pipeline] stage 00:36:41.075 [Pipeline] { (Epilogue) 00:36:41.088 [Pipeline] catchError 00:36:41.090 [Pipeline] { 00:36:41.101 [Pipeline] echo 00:36:41.102 Cleanup processes 00:36:41.105 [Pipeline] sh 00:36:41.414 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:41.414 2096686 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:36:41.414 2096807 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:41.426 [Pipeline] sh 00:36:41.704 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:41.704 ++ grep -v 'sudo pgrep' 00:36:41.704 ++ awk '{print $1}' 00:36:41.704 + sudo kill -9 2096686 00:36:41.713 [Pipeline] sh 00:36:41.992 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:52.021 [Pipeline] sh 00:36:52.307 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:52.307 Artifacts sizes are good 00:36:52.322 [Pipeline] archiveArtifacts 00:36:52.329 Archiving artifacts 00:36:52.552 [Pipeline] sh 00:36:52.834 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:36:52.850 [Pipeline] cleanWs 00:36:52.859 [WS-CLEANUP] Deleting project workspace... 00:36:52.859 [WS-CLEANUP] Deferred wipeout is used... 00:36:52.865 [WS-CLEANUP] done 00:36:52.866 [Pipeline] } 00:36:52.886 [Pipeline] // catchError 00:36:52.898 [Pipeline] sh 00:36:53.177 + logger -p user.info -t JENKINS-CI 00:36:53.185 [Pipeline] } 00:36:53.199 [Pipeline] // stage 00:36:53.203 [Pipeline] } 00:36:53.220 [Pipeline] // node 00:36:53.227 [Pipeline] End of Pipeline 00:36:53.258 Finished: SUCCESS